Bash-my-AWS is a simple but powerful set of CLI commands for managing resources on Amazon Web Services.
They harness the power of Amazon's AWS CLI, while abstracting away verbosity.
The project implements some innovative patterns but (arguably) remains simple, beautiful and readable.
Note: Extensive documentation at https://bash-my-aws.org/
There are two main types of commands.
1. Resource Listing Commands
These generally consist of the pluralised form of the resource name.
$ buckets
example-assets 2019-12-08 02:35:44.758551
example-logs 2019-12-08 02:35:52.669771
example-backups 2019-12-08 02:35:56.579434
$ stacks
nagios CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
postgres01 CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
postgres02 CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
prometheus CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
$ keypairs
alice 8f:85:9a:1e:6c:76:29:34:37:45:de:7f:8d:f9:70:eb
bob 56:73:29:c2:87:7b:6f:b6:f2:f3:b4:c4:e4:2b:12:d4
carol 29:4e:1c:cb:7a:d4:85:0e:4f:b6:34:4c:d4:79:32:00
2. Resource detail/action commands
These generally consist of a resource name and action separated by a hyphen. This makes discovering them via shell completion simple.
Some retrieve information about resources while others make changes to them.
$ keypair-delete alice bob
You are about to delete the following EC2 SSH KeyPairs:
alice
bob
Are you sure you want to continue? y
See the Command Reference for a full list of commands.
In the example above, shell autocompletion retrieved the existing EC2 Keypair names (alice, bob) from AWS. This helps avoid the need to rely on human memory or terminal copypasta.
Additionally, all of the bash-my-aws commands are available as completions for the bma command, so bma [TAB][TAB] will produce a list of all of the available commands.
The commands themselves are line oriented and work nicely in unix pipelines with other unix commands (e.g. grep, awk, etc).
$ stacks | grep postgres
postgres01 CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
postgres02 CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
They also work incredibly well with each other due to the way they treat input from STDIN. The first token from each line of STDIN is taken to be a resource identifier (and the rest is discarded).
$ stacks | grep postgres | stack-delete
You are about to delete the following stacks:
postgres01
postgres02
Are you sure you want to continue? y
Some users have compared this user experience to functionality in Windows PowerShell.
Bash-my-AWS is insanely simple to pick up and start using but contains a lot of convenient shortcuts you can make use of.
Example: resource listing commands accept a filter argument, removing the need for | grep.
In the following example someone has given a CloudFormation stack a really long name:
$ stacks
nagios CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
postgres01 DELETE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
postgres02 DELETE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
prometheus CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
stack-with-a-annoyingly-long-name CREATE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
This affects the output when we look at our Postgres stacks:
$ stacks | grep postgres
postgres01 DELETE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
postgres02 DELETE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
The resource listing command can instead filter the output before applying column, so the remaining rows are not padded to fit the long stack name:
$ stacks postgres
postgres01 DELETE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
postgres02 DELETE_COMPLETE 2011-05-23T15:47:44Z NEVER_UPDATED NOT_NESTED
As shown below, you may simply clone the GitHub repo and source the files required. (You should probably fork it instead to keep your customisations.)
$ git clone https://github.com/bash-my-aws/bash-my-aws.git ~/.bash-my-aws
Put the following in your shell's startup file:
export PATH="$PATH:$HOME/.bash-my-aws/bin"
source ~/.bash-my-aws/aliases
# For ZSH users, uncomment the following two lines:
# autoload -U +X compinit && compinit
# autoload -U +X bashcompinit && bashcompinit
source ~/.bash-my-aws/bash_completion.sh
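To check the installation, open a new shell (or reload the current one) and confirm how the commands resolve. This is just a quick sanity check, not an official step from the project:

$ exec "$SHELL"    # reload so PATH, aliases and completions take effect
$ type bma         # should resolve to ~/.bash-my-aws/bin/bma
$ type stacks      # an alias by default (a function if you source lib/*-functions)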
Bash-my-AWS began as a collection of bash functions, sourced into your shell. More recently, the default suggestion has been to load aliases that execute a small wrapper script that loads the functions and executes the desired function.
After years of zsh users asking for support, one stepped up and identified a change that would eliminate any shell compatibility problems without compromising the functionality, simplicity and discoverability of the project. Massive thanks to @ninth-dev for this.
# bash users may source the functions instead of loading the aliases
if [ -d "${HOME}/.bash-my-aws" ]; then
  for f in ~/.bash-my-aws/lib/*-functions; do source "$f"; done
fi
The default way to run the commands is using the aliases:
$ instances
i-e6f097f6ea4457757 ami-123456789012 t3.nano running example-ec2-ap-southeast-2 2019-12-07T08:12:00.000Z ap-southeast-2a None
i-b983805b4b254f749 ami-123456789012 t3.nano running postfix-prod 2019-12-07T08:26:30.000Z ap-southeast-2a None
i-fed39ebe7204dfd37 ami-123456789012 t3.nano running postfix-prod 2019-12-07T08:26:34.000Z ap-southeast-2a None
i-47955eb46d98b4dd8 ami-123456789012 t3.nano running prometheus 2019-12-07T08:27:02.000Z ap-southeast-2a None
i-8d25b78d40d17f38a ami-123456789012 t3.nano running plex-server 2019-12-07T08:27:38.000Z ap-southeast-2a None
It's also possible to run them using the bma wrapper. (This is sometimes required when using a restrictive auth tool.)
$ bma instances
i-e6f097f6ea4457757 ami-123456789012 t3.nano running example-ec2-ap-southeast-2 2019-12-07T08:12:00.000Z ap-southeast-2a None
i-b983805b4b254f749 ami-123456789012 t3.nano running postfix-prod 2019-12-07T08:26:30.000Z ap-southeast-2a None
i-fed39ebe7204dfd37 ami-123456789012 t3.nano running postfix-prod 2019-12-07T08:26:34.000Z ap-southeast-2a None
i-47955eb46d98b4dd8 ami-123456789012 t3.nano running prometheus 2019-12-07T08:27:02.000Z ap-southeast-2a None
i-8d25b78d40d17f38a ami-123456789012 t3.nano running plex-server 2019-12-07T08:27:38.000Z ap-southeast-2a None
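The wrapper also plays nicely with credential helpers. For example, assuming you use aws-vault (a third-party tool, not part of bash-my-aws) and have a profile named my-profile, you could run:

$ aws-vault exec my-profile -- bma instances

Any tool that can exec a command with temporary credentials in the environment should work the same way.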
For each resource type, there is a command to list them:
$ instances
i-e6f097f6ea4457757 ami-123456789012 t3.nano running example-ec2-ap-southeast-2 2019-12-07T08:12:00.000Z ap-southeast-2a None
i-b983805b4b254f749 ami-123456789012 t3.nano running postfix-prod 2019-12-07T08:26:30.000Z ap-southeast-2a None
i-fed39ebe7204dfd37 ami-123456789012 t3.nano running postfix-prod 2019-12-07T08:26:34.000Z ap-southeast-2a None
i-47955eb46d98b4dd8 ami-123456789012 t3.nano running prometheus 2019-12-07T08:27:02.000Z ap-southeast-2a None
i-8d25b78d40d17f38a ami-123456789012 t3.nano running plex-server 2019-12-07T08:27:38.000Z ap-southeast-2a None
and a number of commands to act on these resources:
$ instance-[TAB][TAB]
instance-asg instance-ssh-details instance-termination-protection
instance-az instance-stack instance-termination-protection-disable
instance-console instance-start instance-termination-protection-enable
instance-dns instance-state instance-type
instance-iam-profile instance-stop instance-userdata
instance-ip instance-tags instance-volumes
instance-ssh instance-terminate instance-vpc
Whether you're new to the tools or just have a bad memory, bash completion makes discovering these commands simple.
See the Command Reference for a full list of commands and usage examples.
This is where the magic happens!
The first token on each line is almost always a resource identifier. When you pipe output between the commands, they just take the first token from each line.
$ instances | grep postfix | instance-ip
i-b983805b4b254f749 10.190.1.70 54.214.71.51
i-fed39ebe7204dfd37 10.135.204.82 54.214.26.190
!!! Note
    Most commands that list resources (stacks, instances, etc) accept a filter term as the first argument. As well as reducing keystrokes, it can also improve output as columnisation is done after filtering.
$ instances postfix | instance-ip
i-b983805b4b254f749 10.190.1.70 54.214.71.51
i-fed39ebe7204dfd37 10.135.204.82 54.214.26.190
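If you want the same behaviour outside of bash-my-aws, the convention is easy to reproduce by hand. The following is a minimal sketch using standard tools, not the project's own helper (__bma_read_inputs also accepts identifiers as arguments):

$ instances postfix | awk '{print $1}' |
    xargs -n1 -I{} aws ec2 describe-instances --instance-ids {} \
      --query 'Reservations[].Instances[].PrivateIpAddress' --output text

Here awk keeps only the first column (the instance IDs) and xargs hands each one to a plain AWS CLI call.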
For those interested in how it works:
For a quick look at how a command works, you can use bma type:
$ bma type instances
instances is a function
instances ()
{
local instance_ids=$(__bma_read_inputs);
local filters=$(__bma_read_filters $@);
aws ec2 describe-instances $([[ -n ${instance_ids} ]] && echo --instance-ids ${instance_ids}) --query "
Reservations[].Instances[][
InstanceId,
ImageId,
InstanceType,
State.Name,
[Tags[?Key=='Name'].Value][0][0],
LaunchTime,
Placement.AvailabilityZone,
VpcId
]" --output text | grep -E -- "$filters" | LC_ALL=C sort -b -k 6 | column -s' ' -t
}
A prettier version can be found in the source code:
# ~/.bash-my-aws/lib/instance-functions
instances() {
local instance_ids=$(__bma_read_inputs)
local filters=$(__bma_read_filters $@)
aws ec2 describe-instances \
$([[ -n ${instance_ids} ]] && echo --instance-ids ${instance_ids}) \
--query "
Reservations[].Instances[][
InstanceId,
ImageId,
InstanceType,
State.Name,
[Tags[?Key=='Name'].Value][0][0],
LaunchTime,
Placement.AvailabilityZone,
VpcId
]" \
--output text |
grep -E -- "$filters" |
LC_ALL=C sort -b -k 6 |
column -s$'\t' -t
}
For more info on AWSCLI query syntax, check out http://jmespath.org/tutorial.html
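As a rough illustration of the query syntax, here is a standalone AWS CLI call (not a bash-my-aws command) using a trimmed-down version of the same JMESPath expression:

$ aws ec2 describe-instances \
    --query "Reservations[].Instances[][InstanceId, State.Name, [Tags[?Key=='Name'].Value][0][0]]" \
    --output text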