550+ DevOps Shell Scripts and Advanced Bash environment.
Fast, Advanced Systems Engineering, Automation, APIs, shorter CLIs, etc.
Heavily used in many GitHub repos, dozens of DockerHub builds (Dockerfiles) and 400+ CI builds.
- .bashrc + .bash.d/*.sh - aliases, functions, colouring, dynamic Git & shell behaviour enhancements, automatic pathing for installations and major languages like Python, Perl, Ruby, NodeJS, Golang across Linux distributions and Mac. See .bash.d/README.md
- setup/ - contains even more scripts to download and install software, JDBC connectors, Mac OS X settings etc.
- .bash.d/ - interactive library
- lib/ - scripting and CI library
See Also: similar DevOps repos in other languages
Hari Sekhon
Cloud & Big Data Contractor, United Kingdom
(ex-Cloudera, former Hortonworks Consultant)
To bootstrap, install packages and link in to your shell profile to inherit all configs, do:
curl -L https://git.io/bash-bootstrap | sh
- links in to your .bashrc / .bash_profile to automatically inherit all .bash.d/*.sh environment enhancements for all technologies (see Inventory below)
- links .* config dotfiles to $HOME for git, vim, top, htop, screen, tmux, editorconfig, Ansible, PostgreSQL .psqlrc etc. (only when they don't already exist so there is no conflict with your own configs)
To only install package dependencies to run scripts, simply cd to the git clone directory and run make:
git clone https://github.com/HariSekhon/DevOps-Bash-tools bash-tools
cd bash-tools
make
make install sets your shell profile to source this repo. See Individual Setup Parts below for more install/uninstall options.
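As a rough sketch, the resulting shell profile hook looks something like the following (the clone path ~/github/bash-tools and the exact line make install writes are assumptions for illustration, not copied from the Makefile):

# in ~/.bashrc - source the repo's .bashrc to inherit .bash.d/*.sh aliases, functions and environment
if [ -f ~/github/bash-tools/.bashrc ]; then
    . ~/github/bash-tools/.bashrc
fi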
.bashrc, .bash.d/*.sh, .gitconfig, .vimrc, .screenrc, .tmux.conf, .toprc, .gitignore ... .psqlrc
- .* - dot conf files for lots of common software eg. advanced .vimrc, .gitconfig, massive .gitignore, .editorconfig, .screenrc, .tmux.conf etc.
- .vimrc - contains many awesome vim tweaks, plus hotkeys for linting lots of different file types in place, including Python, Perl, Bash / Shell, Dockerfiles, JSON, YAML, XML, CSV, INI / Properties files, LDAP LDIF etc. without leaving the editor!
- .screenrc - fancy screen configuration including advanced colour bar, large history, hotkey reloading, auto-blanking etc.
- .tmux.conf - fancy tmux configuration including advanced colour bar and plugins, settings, hotkey reloading etc.
- .gitconfig - advanced Git configuration
- .gitignore - extensive Git ignore of trivial files you shouldn't commit
- .bashrc - shell tuning and sourcing of .bash.d/*.sh
- .bash.d/*.sh - thousands of lines of advanced bashrc code, aliases, functions and environment variables
Run make bash to link .bashrc / .bash_profile and the .* dot config files to your $HOME directory to auto-inherit everything.
- lib/*.sh - Bash utility libraries full of functions for Docker, environment, CI detection (Travis CI, Jenkins etc), port and HTTP url availability content checks etc. Sourced from all my other GitHub repos to make setting up Dockerized tests easier.
- setup/install_*.sh - various simple to use installation scripts for common technologies like AWS CLI, Azure CLI, GCloud SDK, Terraform, Ansible, MiniKube, MiniShift (Kubernetes / Redhat OpenShift/OKD dev VMs), Maven, Gradle, SBT, EPEL, RPMforge, Homebrew, Travis CI, Circle CI, AppVeyor, BuildKite, Parquet Tools etc.
- clean_caches.sh - cleans out OS package and programming language caches - useful to save space or reduce Docker image size
- curl_auth.sh - shortens the curl command by auto-loading your OAuth2 / JWT API token or username & password from environment variables or an interactive starred password prompt through a ram file descriptor to avoid placing them on the command line (which would expose your credentials in the process list or OS audit log files). Used by many other adjacent API querying scripts (see the example below this list)
- ldapsearch.sh - shortens the ldapsearch command by inferring switches from environment variables
- ldap_user_recurse.sh / ldap_group_recurse.sh - recurse Active Directory LDAP users upwards to find all parent groups, or groups downwards to find all nested users (useful for debugging LDAP integration and group-based permissions)
- find_duplicate_files*.sh - finds duplicate files by size and/or checksum in given directory trees. Checksums are only done on files that already have matching byte counts, for efficiency
- find_broken_links.sh - finds broken links, with delays to avoid tripping defenses
- jvm_heaps*.sh - shows all your Java heap sizes for all running Java processes, and their total MB (for performance tuning and sizing)
- random_select.sh - selects one of the given args at random. Useful for sampling, running randomized subsets of large test suites etc.
- split.sh - splits large files into N parts (defaults to the number of your CPU cores) to parallelize operations on them
- ssl_get_cert.sh - gets a remote host:port server's SSL cert in a format you can pipe, save and use locally, for example in Java truststores
- ssl_verify_cert.sh - verifies a remote SSL certificate (a battle-tested, more feature-rich version, check_ssl_cert.pl, exists in the Advanced Nagios Plugins repo)
- urlencode.sh / urldecode.sh - URL encode/decode quickly on the command line, in pipes etc.
- vagrant_hosts.sh - generates /etc/hosts output from a Vagrantfile
- vagrant_total_mb.sh - calculates the RAM committed to VMs in a Vagrantfile
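A hedged usage sketch of curl_auth.sh - the environment variable name and URL below are illustrative assumptions, check the script's --help for the exact variables it reads:

export TOKEN=...                              # or let the script prompt for a password interactively
./curl_auth.sh https://example.com/api/endpoint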
mysql*.sh - MySQL scripts:
  - mysql.sh - shortens the mysql command to connect to MySQL by auto-populating switches from both standard environment variables like $MYSQL_TCP_PORT, $DBI_USER, $MYSQL_PWD (see doc) and other common environment variables like $MYSQL_HOST / $HOST, $MYSQL_USER / $USER, $MYSQL_PASSWORD / $PASSWORD, $MYSQL_DATABASE / $DATABASE
  - mysql_foreach_table.sh - executes a SQL query against every table, replacing {db} and {table} in each iteration eg. select count(*) from {table} (see the example below this list)
  - mysql_*.sh - various scripts using mysql.sh for row counts, iterating each table, or outputting clean lists of databases and tables for quick scripting
  - mysqld.sh - one-touch MySQL: boots a docker container + drops in to a mysql shell, with /sql scripts mounted in the container for easy sourcing eg. source /sql/<name>.sql. Optionally loads the sample 'chinook' database
  - mariadb.sh - one-touch MariaDB: boots a docker container + drops in to a mysql shell, with /sql scripts mounted in the container for easy sourcing eg. source /sql/<name>.sql. Optionally loads the sample 'chinook' database
  - sqlite.sh - one-touch SQLite: starts a sqlite3 shell with the sample 'chinook' database loaded
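A usage sketch of mysql_foreach_table.sh, assuming the templated query is passed as a single quoted argument (check --help for the exact calling convention):

export MYSQL_HOST=localhost MYSQL_USER=root MYSQL_PASSWORD=...   # or any of the variables mysql.sh supports
./mysql_foreach_table.sh 'SELECT COUNT(*) FROM {db}.{table}'     # {db} and {table} are substituted per iteration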
postgres*.sh / psql.sh - PostgreSQL scripts:
  - postgres.sh - one-touch PostgreSQL: boots a docker container + drops in to a psql shell, with /sql scripts mounted in the container for easy sourcing eg. \i /sql/<name>.sql. Optionally loads the sample 'chinook' database
  - psql.sh - shortens the psql command to connect to PostgreSQL by auto-populating switches from environment variables, using both the standard postgres-supported environment variables like $PG* (see doc) as well as other common environment variables like $POSTGRESQL_HOST / $POSTGRES_HOST / $HOST, $POSTGRESQL_USER / $POSTGRES_USER / $USER, $POSTGRESQL_PASSWORD / $POSTGRES_PASSWORD / $PASSWORD, $POSTGRESQL_DATABASE / $POSTGRES_DATABASE / $DATABASE (see the example below this list)
  - postgres_foreach_table.sh - executes a SQL query against every table, replacing {db}, {schema} and {table} in each iteration eg. select count(*) from {table}
  - postgres_*.sh - various scripts using psql.sh for row counts, iterating each table, or outputting clean lists of databases, schemas and tables for quick scripting
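A quick usage sketch relying on the environment variables listed above (the values are placeholders):

export POSTGRES_HOST=localhost POSTGRES_USER=postgres POSTGRES_PASSWORD=... POSTGRES_DATABASE=chinook
./psql.sh        # connects without having to repeat -h / -U / -d switches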
aws_*.sh - AWS scripts:
  - .envrc-aws - copy to .envrc for direnv to auto-load AWS configuration settings such as AWS Profile, Compute Region, EKS cluster kubectl context etc.
  - .envrc-kubernetes - sets the kubectl context isolated to the current shell to prevent race conditions between shells and scripts caused by otherwise naively changing the global ~/.kube/config context
  - aws_account_summary.sh - prints an AWS account summary in key = value pairs for easy viewing / grepping of things like AccountMFAEnabled, AccountAccessKeysPresent, useful for checking whether the root account has MFA enabled and no access keys, comparing the number of users vs the number of MFA devices etc. (see also check_aws_root_account.py in Advanced Nagios Plugins)
  - aws_billing_alarm.sh - creates a CloudWatch billing alarm and SNS topic with a subscription to email you when you incur charges above a given threshold. This is often the first thing you want to do on an account
  - aws_budget_alarm.sh - creates an AWS Budgets billing alarm and SNS topic with a subscription to email you both when you start incurring forecasted charges of over 80% of your budget, and at 90% actual usage. This is often the first thing you want to do on an account
  - aws_cloudtrails_cloudwatch.sh - lists Cloud Trails and their last delivery to CloudWatch Logs (should be recent)
  - aws_cloudtrails_event_selectors.sh - lists Cloud Trails and their event selectors to check each one has at least one event selector
  - aws_cloudtrails_s3_accesslogging.sh - lists Cloud Trails buckets and their Access Logging prefix and target bucket. Checks S3 access logging is enabled
  - aws_cloudtrails_s3_kms.sh - lists Cloud Trails and whether their S3 buckets are KMS secured
  - aws_cloudtrails_status.sh - lists Cloud Trails status - whether logging, multi-region and log file validation are enabled
  - aws_config_all_types.sh - lists AWS Config recorders, checking all resource types are supported (should be true) and global resources are included (should be true)
  - aws_config_recording.sh - lists AWS Config recorders, their recording status (should be true) and their last status (should be success)
  - aws_ecr_docker_build_push.sh - builds a docker image and pushes it to AWS ECR with not just the latest docker tag but also the current Git hashref and Git tags
  - aws_ecr_tag_image.sh - tags an AWS ECR image with another tag without pulling and pushing it
  - aws_ecr_tag_image_by_digest.sh - same as above but tags an AWS ECR image found via digest (more accurate, as a reference by existing tag can be a moving target). Useful to recover images that have become untagged
  - aws_foreach_project.sh - executes a templated command across all AWS named profiles configured in AWS CLIv2, replacing {profile} in each iteration. Combine with other scripts for powerful functionality, auditing, setup etc. eg. aws_kube_creds.sh to configure kubectl config for all EKS clusters in all environments
  - aws_foreach_region.sh - executes a templated command against each AWS region enabled for the current account, replacing {region} in each iteration. Combine with the AWS CLI or scripts to find resources across regions (see the example below this list)
  - aws_harden_password_policy.sh - strengthens the AWS password policy according to CIS Foundations Benchmark recommendations
  - aws_iam_generate_credentials_report_wait.sh - generates an AWS IAM credentials report
  - aws_ip_ranges.sh - gets all AWS IP ranges for a given Region and/or Service using the IP range API
  - aws_kms_key_rotation_enabled.sh - lists AWS KMS keys and whether they have key rotation enabled
  - aws_kube_creds.sh - auto-loads credentials for all AWS EKS clusters in the current --profile and --region so your kubectl is ready to rock on AWS
  - aws_kubectl.sh - runs kubectl commands safely fixed to a given AWS EKS cluster using config isolation to avoid concurrency race conditions
  - aws_meta.sh - AWS EC2 Metadata API query shortcut. See also the official ec2-metadata shell script with more features
  - aws_password_policy.sh - prints the AWS password policy in key = value pairs for easy viewing / grepping (used by aws_harden_password_policy.sh before and after to show the differences)
  - aws_policies_attached_to_users.sh - finds AWS IAM policies directly attached to users (anti-best practice) instead of groups
  - aws_policies_granting_full_access.sh - finds AWS IAM policies granting full access (anti-best practice)
  - aws_policies_unattached.sh - lists unattached AWS IAM policies
  - aws_s3_access_logging.sh - lists AWS S3 buckets and their access logging status
  - aws_spot_when_terminated.sh - executes commands when the AWS EC2 instance running this script is notified of Spot Termination, acts as a latch mechanism that can be set any time after boot
  - aws_ssm_put_param.sh - reads a value from a command line argument or non-echo prompt and saves it to AWS Systems Manager Parameter Store. Useful for uploading a password without exposing it on your screen
  - aws_users.sh - lists your AWS IAM users
  - aws_users_access_key_age.sh - prints AWS users' access key status and age (see also aws_users_access_key_age.py in DevOps Python tools which can filter by age and status)
  - aws_users_access_key_age_report.sh - prints AWS users' access key status and age using a bulk credentials report (faster for many users)
  - aws_users_access_key_last_used.sh - prints AWS users' access keys last used dates
  - aws_users_access_key_last_used_report.sh - same as above using the bulk credentials report (faster for many users)
  - aws_users_last_used_report.sh - lists AWS users' password / access keys last used dates
  - aws_users_mfa_active_report.sh - lists AWS users' password enabled and MFA enabled status
  - aws_users_mfa_serials.sh - lists AWS users' MFA serial numbers (differentiates Virtual vs Hardware MFAs)
  - aws_users_pw_last_used.sh - lists AWS users and their password last used dates
  - setup/eksctl_cluster.sh - downloads eksctl and creates an AWS EKS Kubernetes cluster
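A sketch of the templated-command pattern mentioned above (the AWS CLI subcommand is just an illustrative choice):

./aws_foreach_region.sh 'aws ec2 describe-instances --region {region} --output table'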
gcp_*.sh / gce_*.sh / gke_*.sh / gcr_*.sh / bigquery_*.sh - GCP scripts:
  - .envrc-gcp - copy to .envrc for direnv to auto-load GCP configuration settings such as Project, Region, Zone, GKE cluster kubectl context or any other GCloud SDK setting to shorten gcloud commands. Applies to the local shell environment only to avoid race conditions caused by naively changing the global gcloud config at ~/.config/gcloud/active_config (see the example below this list)
  - .envrc-kubernetes - sets the kubectl context isolated to the current shell to prevent race conditions between shells and scripts caused by otherwise naively changing the global ~/.kube/config context
  - gcp_terraform_create_credential.sh - creates a service account for Terraform with full permissions, creates and downloads a credential key json and even prints the export GOOGLE_CREDENTIALS command to configure your environment to start using Terraform immediately. Run once for each project and combine with direnv for fast, easy management of multiple GCP projects
  - gcp_cli_create_credential.sh - creates a GCloud SDK CLI service account with full owner permissions to all projects, creates and downloads a credential key json and even prints the export GOOGLE_CREDENTIALS command to configure your environment to start using it. Avoids having to reauth with gcloud auth login every day
  - gcp_spinnaker_create_credential.sh - creates a Spinnaker service account with permissions on the current project, creates and downloads a credential key json and even prints the Halyard CLI configuration commands to use it
  - gcp_info.sh - huge Google Cloud inventory of deployed resources within the current project - Cloud SDK info plus all of the following (detects which services are enabled to query):
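A short sketch of the direnv workflow described above (the clone path is an assumption):

cp ~/github/bash-tools/.envrc-gcp .envrc    # then edit the project / region / zone values inside
direnv allow .                              # direnv now loads the settings whenever you cd into this directory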
  - gcp_info_compute.sh - GCE Virtual Machine instances, App Engine instances, Cloud Functions, GKE clusters, all Kubernetes objects across all GKE clusters (see kubernetes_info.sh below for more details)
  - gcp_info_storage.sh - Cloud SQL info below, plus: Cloud Storage Buckets, Cloud Filestore, Cloud Memorystore Redis, BigTable clusters and instances, Datastore indexes
  - gcp_info_cloud_sql.sh - Cloud SQL instances, whether their backups are enabled, and all databases on each instance
  - gcp_info_cloud_sql_databases.sh - lists databases inside each Cloud SQL instance. Included in gcp_info_cloud_sql.sh
  - gcp_info_cloud_sql_backups.sh - lists backups for each Cloud SQL instance with their dates and status. Not included in gcp_info_cloud_sql.sh for brevity. See also gcp_sql_export.sh further down for more durable backups to GCS
  - gcp_info_cloud_sql_users.sh - lists users for each running Cloud SQL instance. Not included in gcp_info_cloud_sql.sh for brevity but useful to audit users
  - gcp_info_networking.sh - VPC Networks, Addresses, Proxies, Subnets, Routers, Routes, VPN Gateways, VPN Tunnels, Reservations, Firewall rules, Forwarding rules, Cloud DNS managed zones and verified domains
  - gcp_info_bigdata.sh - Dataproc clusters and jobs in all regions, Dataflow jobs in all regions, PubSub messaging topics, Cloud IOT registries in all regions
  - gcp_info_tools.sh - Cloud Source Repositories, Cloud Builds, Container Registry images across all major repos (gcr.io, us.gcr.io, eu.gcr.io, asia.gcr.io), Deployment Manager deployments
  - gcp_info_auth_config.sh - Auth Configurations, Organizations & Current Config
  - gcp_info_projects.sh - Projects names and IDs
  - gcp_info_services.sh - Services & APIs enabled
  - gcp_service_apis.sh - lists all available GCP Services, APIs and their states (enabled/disabled), and provides an is_service_enabled() function used throughout the adjacent scripts to avoid errors and only show relevant enabled services
  - gcp_info_accounts_secrets.sh - IAM Service Accounts, Secret Manager secrets
  - gcp_info_all_projects.sh - same as above but for all detected projects
  - gcp_foreach_project.sh - executes a templated command across all GCP projects, replacing {project_id} and {project_name} in each iteration (used by gcp_info_all_projects.sh to call gcp_info.sh) - see the example below this list
  - gcp_find_orphaned_disks.sh - lists orphaned disks across one or more GCP projects (not attached to any compute instance)
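A sketch of the {project_id} templating (the gcloud command is just an illustrative choice):

./gcp_foreach_project.sh 'gcloud compute instances list --project {project_id}'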
gcp_secrets_*.sh - Google Secret Manager scripts:
  - gcp_secrets_to_kubernetes.sh - loads GCP secrets to Kubernetes secrets in a 1-to-1 mapping. Can specify a list of secrets or auto-loads all GCP secrets with labels kubernetes-cluster and kubernetes-namespace matching the current kubectl context (kcd to the right namespace first, see .bash.d/kubernetes). See also kubernetes_get_secret_values.sh to debug the actual values that got loaded
  - gcp_secrets_to_kubernetes_multipart.sh - creates a Kubernetes secret from multiple GCP secrets (used to put private.pem and public.pem into the same secret to appear as files on volume mounts for apps in pods to use)
  - gcp_secrets_labels.sh - lists GCP Secrets and their labels, one per line, suitable for quick views or shell pipelines
  - gcp_secrets_update_lable.sh - updates all GCP secrets in the current project matching a label key=value with a new label value
  - gcp_service_account_credential_to_secret.sh - creates a GCP service account and exports a credential key to GCP Secret Manager (useful to stage or combine with gcp_secrets_to_kubernetes.sh)
gke_*.sh - Google Kubernetes Engine scripts:
  - gke_kube_creds.sh - auto-loads credentials for all GKE clusters in the current / given / all projects so your kubectl is ready to rock on GCP
  - gke_kubectl.sh - runs kubectl commands safely fixed to a given GKE cluster using config isolation to avoid concurrency race conditions
  - gke_cert_manager_firewall_rule.sh - creates a GCP firewall rule for a given GKE cluster's masters to access the Cert Manager admission webhook (auto-determines the master cidr and network)
  - gke_nodepool_nodes.sh - lists all nodes in a given nodepool on the current GKE cluster via kubectl labels (fast)
  - gke_nodepool_nodes2.sh - same as above via the GCloud SDK (slow, iterates instance groups)
  - gke_nodepool_taint.sh - taints/untaints all nodes in a given GKE nodepool on the current cluster (see kubectl_node_taints.sh for a quick way to see taints)
  - gke_nodepool_drain.sh - drains all nodes in a given nodepool (to decommission or rebuild the node pool, for example with different taints)
  - gke_persistent_volumes_disk_mappings.sh - maps GKE kubernetes persistent volumes to GCP persistent disk names, along with PVC and namespace, useful when investigating, resizing PVs etc.
gcr_*.sh - Google Container Registry scripts:
  - gcr_tag_latest.sh - tags a given GCR docker image:tag as latest without pulling or pushing the docker image (see the example below this list)
  - gcr_tag_branch.sh - tags a given GCR docker image:tag with the branch from which it was built, without pulling or pushing the docker image
  - gcr_tag_datetime.sh - tags a given GCR docker image with its creation date and UTC timestamp (when it was uploaded or created by Google Cloud Build) without pulling or pushing the docker image
  - gcr_newest_image_tags.sh - lists the tags for the given GCR docker image with the newest creation date (can use this to determine which image version to tag as latest)
  - gcr_tag_newest_image_as_latest.sh - finds and tags the newest build of a given GCR docker image as latest without pulling or pushing the docker image
  - gcr_alternate_tags.sh - lists all the tags for a given GCR docker image:tag (use arg <image>:latest to see what version / build hashref / date tag has been tagged as latest)
  - gcr_list_tags.sh - lists all the tags for a given GCR docker image
  - gcr_tags_timestamps.sh - lists all the tags and their timestamps for a given GCR docker image
  - gcr_tags_old.sh - lists tags older than N days for a given GCR docker image
  - gcr_delete_old_tags.sh - deletes tags older than N days for a given GCR docker image. Lists the image:tags to be deleted and prompts for confirmation for safety
  - gcp_ci_build.sh - script template for CI/CD to trigger Google Cloud Build to build a docker container image with extra datetime and latest tagging
  - gcp_ci_deploy_k8s.sh - script template for CI/CD to deploy a GCR docker image to GKE Kubernetes using Kustomize
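For example, a hedged sketch of the gcr_tag_latest.sh usage (the image path is a placeholder and the exact argument format is an assumption - check --help):

./gcr_tag_latest.sh gcr.io/my-project/my-app:1.2.3    # re-tags 1.2.3 as latest via the registry API, no pull/push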
gce_*.sh - Google Compute Engine scripts:
  - gce_meta.sh - simple script to query the GCE metadata API from within Virtual Machines
  - gce_when_preempted.sh - GCE VM preemption latch script - can be executed any time to set one or more commands to execute upon preemption
  - gce_is_preempted.sh - returns true/false if the GCE VM is preempted, callable from other scripts
  - gce_instance_service_accounts.sh - lists GCE VM instance names and their service accounts
  - gcp_firewall_disable_default_rules.sh - disables those lax GCP default network "allow all" firewall rules
  - gcp_firewall_risky_rules.sh - lists risky GCP firewall rules that are enabled and allow traffic from 0.0.0.0/0
gcp_sql_*.sh - Cloud SQL scripts:
  - gcp_sql_backup.sh - creates Cloud SQL backups
  - gcp_sql_export.sh - creates Cloud SQL exports to GCS
  - gcp_sql_enable_automated_backups.sh - enables automated daily Cloud SQL backups
  - gcp_sql_enable_point_in_time_recovery.sh - enables point-in-time recovery with write-ahead logs
  - gcp_sql_proxy.sh - boots a Cloud SQL Proxy to all Cloud SQL instances for fast, convenient direct psql / mysql access via local sockets. Installs Cloud SQL Proxy if necessary
  - gcp_sql_running_primaries.sh - lists primary running Cloud SQL instances
  - gcp_sql_service_accounts.sh - lists Cloud SQL instance service accounts. Useful for copying to IAM to grant permissions (eg. Storage Object Creator for SQL export backups to GCS)
  - gcp_sql_create_readonly_service_account.sh - creates a service account with read-only permissions to Cloud SQL eg. to run export backups to GCS
  - gcp_sql_grant_instances_gcs_object_creator.sh - grants minimal GCS objectCreator permission on a bucket to primary Cloud SQL instances for exports
  - gcp_cloud_schedule_sql_exports.sh - creates Google Cloud Scheduler jobs to trigger a Cloud Function via PubSub to run Cloud SQL exports to GCS for all Cloud SQL instances in the current GCP project
bigquery_*.sh - BigQuery scripts:
  - bigquery_list_datasets.sh - lists BigQuery datasets in the current GCP project
  - bigquery_list_tables.sh - lists BigQuery tables in a given dataset
  - bigquery_list_tables_all_datasets.sh - lists tables for all datasets in the current GCP project
  - bigquery_foreach_dataset.sh - executes a templated command for each dataset
  - bigquery_foreach_table.sh - executes a templated command for each table in a given dataset
  - bigquery_foreach_table_all_datasets.sh - executes a templated command for each table in each dataset in the current GCP project
  - bigquery_table_row_count.sh - gets the row count for a given table
  - bigquery_tables_row_counts.sh - gets the row counts for all tables in a given dataset
  - bigquery_tables_row_counts_all_datasets.sh - gets the row counts for all tables in all datasets in the current GCP project
  - bigquery_generate_query_biggest_tables_across_datasets_by_row_count.sh - generates a BigQuery SQL query to find the top 10 biggest tables by row count
  - bigquery_generate_query_biggest_tables_across_datasets_by_size.sh - generates a BigQuery SQL query to find the top 10 biggest tables by size
gcp_service_account*.sh - GCP service account scripts:
  - gcp_service_account_credential_to_secret.sh - creates a GCP service account and exports a credential key to GCP Secret Manager (useful to stage or combine with gcp_secrets_to_kubernetes.sh)
  - gcp_service_accounts_credential_keys.sh - lists all service account credential keys and expiry dates, can grep 9999-12-31T23:59:59Z to find non-expiring keys
  - gcp_service_accounts_credential_keys_age.sh - lists all service account credential keys' age in days
  - gcp_service_accounts_credential_keys_expired.sh - lists expired service account credential keys that should be removed and recreated if needed
  - gcp_service_account_members.sh - lists all members and roles authorized to use any service accounts. Useful for finding GKE Workload Identity mappings
gcp_iam_*.sh - GCP IAM scripts:
  - gcp_iam_roles_in_use.sh - lists GCP IAM roles in use in the current or all projects
  - gcp_iam_identities_in_use.sh - lists GCP IAM identities (users/groups/serviceAccounts) in use in the current or all projects
  - gcp_iam_roles_granted_to_identity.sh - lists GCP IAM roles granted to identities matching the regex (users/groups/serviceAccounts) in the current or all projects
  - gcp_iam_roles_granted_too_widely.sh - lists GCP IAM roles which have been granted to allAuthenticatedUsers or even worse allUsers (unauthenticated) in one or all projects
  - gcp_iam_roles_with_direct_user_grants.sh - lists GCP IAM roles which have been granted directly to users in violation of best-practice group-based management
  - gcp_iam_serviceaccount_members.sh - lists members with permissions to use each GCP service account
  - gcp_iam_workload_identities.sh - lists GKE Workload Identity integrations, uses gcp_iam_serviceaccount_members.sh
  - gcp_iam_users_granted_directly.sh - lists GCP IAM users which have been granted roles directly in violation of best-practice group-based management
kubernetes_*.sh - Kubernetes scripts:
  - .envrc-kubernetes - copy to .envrc for direnv to auto-load the right Kubernetes kubectl context isolated to the current shell to prevent race conditions between shells and scripts caused by otherwise naively changing the global ~/.kube/config context
  - kubernetes_info.sh - huge Kubernetes inventory listing of deployed resources across all namespaces in the current cluster / kube context
  - kubectl.sh - runs kubectl commands safely fixed to a given context using config isolation to avoid concurrency race conditions
  - kubectl_diff_apply.sh - generates a diff and prompts to apply. See also kustomize_diff_apply.sh
  - kubectl_create_namespaces - creates any namespaces in yaml files or stdin, a prerequisite for a diff on a blank install, used by adjacent scripts for safety
  - kubernetes_foreach_context.sh - executes a command across all kubectl contexts, replacing {context} in each iteration (skips lab contexts docker / minikube / minishift to avoid hangs since they're often offline)
  - kubernetes_foreach_namespace.sh - executes a command across all kubernetes namespaces in the current cluster context, replacing {namespace} in each iteration (see the example below this list). Combine with kubernetes_foreach_context.sh, and useful when combined with gcp_secrets_to_kubernetes.sh to load all secrets from GCP to Kubernetes for the current cluster, or combined with gke_kube_creds.sh and kubernetes_foreach_context.sh for all clusters!
  - kubernetes_api.sh - finds the Kubernetes API and runs your curl arguments against it, auto-getting the authorization token and auto-populating the OAuth authentication header
  - kubernetes_etcd_backup.sh - creates a timestamped backup of the Kubernetes Etcd database for a kubeadm cluster
  - kubeadm_join_cmd.sh - outputs the kubeadm join command (generates a new token) to join an existing Kubernetes cluster (used in vagrant kubernetes provisioning scripts)
  - kubeadm_join_cmd2.sh - outputs the kubeadm join command manually (calculates the cert hash + generates a new token) to join an existing Kubernetes cluster
  - kubectl_exec.sh - finds and execs to the first Kubernetes pod matching the given name regex, optionally specifying the container name regex to exec to, and shows the full generated kubectl exec command line for clarity
  - kubectl_exec2.sh - finds and execs to the first Kubernetes pod matching given pod filters, optionally specifying the container to exec to, and shows the full generated kubectl exec command line for clarity
  - kubectl_pods_per_node.sh - lists the number of pods per node sorted descending
  - kubectl_pods_important.sh - lists important pods and their nodes to check on scheduling
  - kubectl_pods_colocated.sh - lists pods from deployments/statefulsets that are colocated on the same node
  - kubectl_node_labels.sh - lists nodes and their labels, one per line, easier to read visually or pipe in scripting
  - kubectl_node_taints.sh - lists nodes and their taints
  - kubectl_jobs_stuck.sh - finds Kubernetes jobs stuck for hours or days with no completions
  - kubectl_jobs_delete_stuck.sh - prompts for confirmation to delete stuck Kubernetes jobs found by the script above
  - kubectl_images.sh - lists Kubernetes container images running on the current cluster
  - kubectl_image_counts.sh - lists Kubernetes container image running counts sorted descending
  - kubectl_pod_count.sh - lists the Kubernetes pods total running count
  - kubectl_container_count.sh - lists the Kubernetes containers total running count
  - kubectl_container_counts.sh - lists Kubernetes container running counts by name sorted descending
  - kubectl_secret_values.sh - prints the keys and base64-decoded values within a given Kubernetes secret for quick debugging of Kubernetes secrets. See also: gcp_secrets_to_kubernetes.sh
  - kustomize_diff_apply.sh - runs Kustomize build, precreates any namespaces, prompts with a diff of the proposed changes, and then applies if you accept them. See also kubectl_diff_apply.sh
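A sketch of the {namespace} templating mentioned above (kubectl get pods is just an illustrative command):

./kubernetes_foreach_namespace.sh 'kubectl -n {namespace} get pods'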
docker_*.sh / dockerhub_*.sh - Docker / DockerHub API scripts:
  - dockerhub_api.sh - queries the DockerHub API v2 with or without authentication ($DOCKERHUB_USER & $DOCKERHUB_PASSWORD / $DOCKERHUB_TOKEN)
  - docker_api.sh - queries a Docker Registry with optional basic authentication if $DOCKER_USER & $DOCKER_PASSWORD are set
  - docker_registry_list_images.sh - lists images in a given private Docker Registry
  - docker_registry_list_tags.sh - lists tags for a given image in a private Docker Registry
  - docker_registry_get_image_manifest.sh - gets a given image:tag manifest from a private Docker Registry
  - docker_registry_tag_image.sh - tags a given image with a new tag in a private Docker Registry via the API without pulling and pushing the image data (much faster and more efficient)
  - dockerhub_list_tags.sh - lists tags for a given DockerHub repo (see the example below this list). See also dockerhub_show_tags.py in the DevOps Python tools repo
  - dockerhub_list_tags_by_last_updated.sh - lists tags for a given DockerHub repo sorted by last updated timestamp descending
  - dockerhub_search.sh - searches with a configurable number of returned items (the older docker cli was limited to 25 results)
  - clean_caches.sh - cleans out OS package and programming language caches, call near the end of a Dockerfile to reduce Docker image size
  - quay.io_api.sh - queries the Quay.io API with OAuth2 authentication token $QUAY_TOKEN
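For example (the repo name is just a placeholder and the argument format is assumed - check --help):

./dockerhub_list_tags.sh library/ubuntu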
kafka_*.sh - scripts to make Kafka CLI usage easier, including auto-setting Kerberos to source the TGT from the environment and auto-populating broker and zookeeper addresses. These are auto-added to the $PATH when .bashrc is sourced. For something similar for Solr, see solr_cli.pl in the DevOps Perl Tools repo.
zookeeper*.sh - Apache ZooKeeper scripts:
  - zookeeper_client.sh - shortens the zookeeper-client command by auto-populating the zookeeper quorum from the environment variable $ZOOKEEPERS, or else parsing the zookeeper quorum from /etc/**/*-site.xml, to make it faster and easier to connect
  - zookeeper_shell.sh - shortens Kafka's zookeeper-shell command by auto-populating the zookeeper quorum from the environment variable $KAFKA_ZOOKEEPERS and optionally $KAFKA_ZOOKEEPER_ROOT to make it faster and easier to connect
hive_*.sh / beeline*.sh - Apache Hive scripts:
  - beeline.sh - shortens the beeline command to connect to HiveServer2 by auto-populating Kerberos and SSL settings, plus zookeepers for HiveServer2 HA discovery if the environment variable $HIVE_HA is set, or using the $HIVESERVER_HOST environment variable so you can connect with no arguments (prompts for the HiveServer2 address if you haven't set $HIVESERVER_HOST or $HIVE_HA)
  - beeline_zk.sh - same as above for HiveServer2 HA by auto-populating SSL and ZooKeeper service discovery settings (specify the $HIVE_ZOOKEEPERS environment variable to override). Automatically called by beeline.sh if either $HIVE_ZOOKEEPERS or $HIVE_HA is set (the latter parses hive-site.xml for the ZooKeeper addresses)
  - hive_foreach_table.sh - executes a SQL query against every table, replacing {db} and {table} in each iteration eg. select count(*) from {table} (see the example below this list)
  - hive_list_databases.sh - lists Hive databases, one per line, suitable for scripting pipelines
  - hive_list_tables.sh - lists Hive tables, one per line, suitable for scripting pipelines
  - hive_tables_metadata.sh - lists a given DDL metadata field for each Hive table (to compare tables)
  - hive_tables_location.sh - lists the data location per Hive table (eg. compare external table locations)
  - hive_tables_row_counts.sh - lists the row count per Hive table
  - hive_tables_column_counts.sh - lists the column count per Hive table
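A usage sketch, assuming the query template is passed as a single quoted argument (check --help for the exact calling convention):

./hive_foreach_table.sh 'SELECT COUNT(*) FROM {db}.{table}'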
impala*.sh - Apache Impala scripts:
  - impala_shell.sh - shortens the impala-shell command to connect to Impala by parsing the Hadoop topology map and selecting a random datanode to connect to its Impalad, acting as a cheap CLI load balancer. For a real load balancer see the HAProxy config for Impala (and many other Big Data & NoSQL technologies). Optional environment variables $IMPALA_HOST (eg. point to an explicit node or an HAProxy load balancer) and IMPALA_SSL=1 (or use the regular impala-shell --ssl argument pass-through)
  - impala_foreach_table.sh - executes a SQL query against every table, replacing {db} and {table} in each iteration eg. select count(*) from {table}
  - impala_list_databases.sh - lists Impala databases, one per line, suitable for scripting pipelines
  - impala_list_tables.sh - lists Impala tables, one per line, suitable for scripting pipelines
  - impala_tables_metadata.sh - lists a given DDL metadata field for each Impala table (to compare tables)
  - impala_tables_location.sh - lists the data location per Impala table (eg. compare external table locations)
  - impala_tables_row_counts.sh - lists the row count per Impala table
  - impala_tables_column_counts.sh - lists the column count per Impala table
hdfs_*.sh - Hadoop HDFS scripts:
  - hdfs_checksum*.sh - walks an HDFS directory tree and outputs HDFS native checksums (faster) or portable externally comparable CRC32, in serial or in parallel to save time
  - hdfs_find_replication_factor_1.sh / hdfs_set_replication_factor_3.sh - finds HDFS files with replication factor 1 / sets HDFS files with replication factor <=2 to replication factor 3 to repair replication safety and avoid no-replica alarms during maintenance operations (see also the Python API version in the DevOps Python Tools repo)
  - hdfs_file_size.sh / hdfs_file_size_including_replicas.sh - quickly differentiate HDFS files' raw size vs total replicated size
  - hadoop_random_node.sh - picks a random Hadoop cluster worker node, like a cheap CLI load balancer, useful in scripts when you want to connect to any worker etc. See also the HAProxy Load Balancer configurations which focus on master nodes
cloudera_*.sh - Cloudera scripts:
  - cloudera_manager_api.sh - script to simplify querying the Cloudera Manager API using environment variables, prompts, authentication and sensible defaults. Built on top of curl_auth.sh
  - cloudera_manager_impala_queries*.sh - queries Cloudera Manager for recent Impala queries, failed queries, exceptions, DDL statements, metadata stale errors, metadata refresh calls etc. Built on top of cloudera_manager_api.sh
  - cloudera_manager_yarn_apps.sh - queries Cloudera Manager for recent Yarn apps. Built on top of cloudera_manager_api.sh
  - cloudera_navigator_api.sh - script to simplify querying the Cloudera Navigator API using environment variables, prompts, authentication and sensible defaults. Built on top of curl_auth.sh
  - cloudera_navigator_audit_logs.sh - fetches Cloudera Navigator audit logs for a given service eg. hive/impala/hdfs via the API, simplifying date handling, authentication and common settings. Built on top of cloudera_navigator_api.sh
  - cloudera_navigator_audit_logs_download.sh - downloads Cloudera Navigator audit logs for each service by year. Skips existing logs, deletes partially downloaded logs on failure, generally retry safe (while true, Control-C, not kill -9 obviously). Built on top of cloudera_navigator_audit_logs.sh
git*.sh - Git scripts:
  - git_foreach_branch.sh - executes a command on all branches (useful in heavily version-branched repos like my Dockerfiles repo) - see the example below this list
  - git_foreach_repo.sh - executes a command against all adjacent repos from a given repolist (used heavily by many adjacent scripts)
  - git_foreach_modified.sh - executes a command against each file with git modified status
  - git_merge_all.sh / git_merge_master.sh / git_merge_master_pull.sh - merges updates from the master branch to all other branches to avoid drift on longer-lived feature branches / version branches (eg. Dockerfiles repo)
  - git_remotes_add_public_repos.sh - auto-creates remotes for the 4 major public repositories (GitHub / GitLab / Bitbucket / Azure DevOps), useful for git pull --all to fetch and merge updates from all providers in one command
  - git_remotes_set_multi_origin.sh - sets up a multi-remote origin for unified push to automatically keep the 4 major public repositories in sync (especially useful for Bitbucket and Azure DevOps which don't have GitLab's auto-mirroring from GitHub feature)
  - git_remotes_set_ssh_to_https.sh - converts the local repo's remote URLs from ssh to https (to get through corporate firewalls), auto-loads http auth tokens if found in environment variables
  - git_remotes_set_https_to_ssh.sh - converts the local repo's remote URLs from https to ssh (more convenient with SSH keys instead of http auth tokens)
  - git_repos_pull.sh - pulls multiple repos based on a source file mapping list - useful for easily syncing lots of Git repos among computers
  - git_repos_update.sh - same as above but also runs the make update build to install the latest dependencies, leverages the above script
  - git_log_empty_commits.sh - finds empty commits in the git history (eg. if a git filter-branch was run but --prune-empty was forgotten, leaking metadata like subjects containing file names or other sensitive info)
  - git_filter_branch_fix_author.sh - rewrites Git history to replace author/committer name & email references (useful to replace default account commits). Powerful, read --help and man git-filter-branch carefully. Should only be used by Git experts
  - git_submodules_update_repos.sh - updates submodules (pulls and commits the latest upstream github repo submodules) - used to cascade submodule updates throughout all my repos
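A sketch of the foreach pattern mentioned above (the git command is just an illustrative choice):

./git_foreach_branch.sh 'git log -1 --oneline'    # prints the latest commit on every branch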
github_*.sh - GitHub API scripts:
  - github_api.sh - queries the GitHub API. Can infer the GitHub user, repo and authentication token from the local checkout or environment ($GITHUB_USER, $GITHUB_TOKEN) - see the example below this list
  - github_foreach_repo.sh - executes a templated command for each non-fork GitHub repo, replacing {user} and {repo} in each iteration
  - github_actions_runner.sh - downloads, configures and runs a local GitHub Actions Runner
  - github_runners.sh - lists GitHub Actions runners
  - github_workflows.sh - lists GitHub Actions workflows for a given repo (or auto-infers the local repository)
  - github_workflow_runs.sh - lists GitHub Actions workflow runs for a given workflow id or name
  - github_workflows_status.sh - lists all GitHub Actions workflows and their statuses for a given repo
  - github_get_user_ssh_public_keys.sh - fetches a given GitHub user's public SSH keys via the API for piping to ~/.ssh/authorized_keys or adjacent tools
  - github_get_ssh_public_keys.sh - fetches the currently authenticated GitHub user's public SSH keys via the API, similar to above but authenticated to get identifying key comments
  - github_add_ssh_public_keys.sh - uploads SSH keys from local files or standard input to the currently authenticated GitHub account. Specify pubkey files (default: ~/.ssh/id_rsa.pub) or read from standard input for piping from adjacent tools
  - github_delete_ssh_public_keys.sh - deletes given SSH keys from the currently authenticated GitHub account by key id or title regex match
  - github_generate_status_page.sh - generates a STATUS.md page by merging all the README.md headers for all of a user's non-forked GitHub repos or a given list of any repos etc.
  - github_sync_repo_descriptions.sh - syncs GitHub repo descriptions to the GitLab & BitBucket repos
  - github_repo_description.sh - fetches the given repo's description (used by github_sync_repo_descriptions.sh)
  - github_repo_stars.sh - fetches the stars, forks and watcher counts for a given repo
  - github_repo_teams.sh - fetches the GitHub Enterprise teams or personal invited collaborators as well as their permissions for a given repo. Combine with github_foreach_repo.sh to audit all your personal or GitHub organization's repos
  - github_repo_protect_branches.sh - enables branch protections on the given repo. Can specify one or more branches to protect, otherwise finds and applies to any of master, main, develop
  - github_repos_disable_wiki.sh - disables the Wiki on one or more given repos to prevent documentation fragmentation and make people use the centralized documentation tool eg. Confluence or Slite
  - github_repos_sync_status.sh - determines whether each GitHub repo's mirrors on GitLab / BitBucket are up to date with the latest commits, by querying all 3 APIs and comparing master branch hashrefs
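A hedged usage sketch of github_api.sh - passing a bare endpoint path as the argument is an assumption, though /user/repos is a real GitHub API path:

export GITHUB_TOKEN=...
./github_api.sh /user/repos    # list repos for the authenticated user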
gitlab_*.sh - GitLab API scripts:
  - gitlab_api.sh - queries the GitLab API. Can infer the GitLab user, repo and authentication token from the local checkout or environment ($GITLAB_USER, $GITLAB_TOKEN)
  - gitlab_foreach_repo.sh - executes a templated command for each GitLab project/repo, replacing {user} and {project} in each iteration
  - gitlab_project_mirrors.sh - lists each GitLab repo and whether it is a mirror or not
  - gitlab_pull_mirror.sh - triggers a GitLab pull mirroring for a given project's repo, or auto-infers the project name from the local git repo
  - gitlab_set_project_description.sh - sets the description for one or more projects using the GitLab API
  - gitlab_get_user_ssh_public_keys.sh - fetches a given GitLab user's public SSH keys via the API, with identifying comments, for piping to ~/.ssh/authorized_keys or adjacent tools
  - gitlab_get_ssh_public_keys.sh - fetches the currently authenticated GitLab user's public SSH keys via the API
  - gitlab_add_ssh_public_keys.sh - uploads SSH keys from local files or standard input to the currently authenticated GitLab account. Specify pubkey files (default: ~/.ssh/id_rsa.pub) or read from standard input for piping from adjacent tools
  - gitlab_delete_ssh_public_keys.sh - deletes given SSH keys from the currently authenticated GitLab account by key id or title regex match
  - gitlab_validate_ci_yaml.sh - validates a .gitlab-ci.yml file via the GitLab API
bitbucket_*.sh - BitBucket API scripts:
  - bitbucket_api.sh - queries the BitBucket API. Can infer the BitBucket user, repo and authentication token from the local checkout or environment ($BITBUCKET_USER, $BITBUCKET_TOKEN)
  - bitbucket_foreach_repo.sh - executes a templated command for each BitBucket repo, replacing {user} and {repo} in each iteration
  - bitbucket_set_project_description.sh - sets the description for one or more repos using the BitBucket API
  - bitbucket_get_ssh_public_keys.sh - fetches the currently authenticated BitBucket user's public SSH keys via the API for piping to ~/.ssh/authorized_keys or adjacent tools
  - bitbucket_add_ssh_public_keys.sh - uploads SSH keys from local files or standard input to the currently authenticated BitBucket account. Specify pubkey files (default: ~/.ssh/id_rsa.pub) or read from standard input for piping from adjacent tools
- azure_devops_api.sh - queries Azure DevOps's API with authentication
- azure_devops_foreach_repo.sh - executes a templated command for each Azure DevOps repo, replacing {user}, {org}, {project} and {repo} in each iteration
- azure_devops_to_github_migration.sh - migrates one or all Azure DevOps git repos to GitHub, including all branches, and sets the default branch to match via the APIs to maintain the same checkout behaviour
- azure_devops_disable_repos.sh - disables one or more given Azure DevOps repos (to prevent further pushes to them after migration to GitHub)
jenkins_*.sh - Jenkins CI scripts:
  - jenkins_cli.sh - shortens the jenkins-cli.jar command by auto-inferring basic configurations, auto-downloading the CLI if absent, inferring a bunch of Jenkins related variables like $JENKINS_URL and authentication from $JENKINS_USER / $JENKINS_PASSWORD, or finds the admin password from inside the local docker container. Used heavily by the jenkins.sh one-shot setup
  - jenkins_password.sh - gets the Jenkins admin password from the local docker container. Used by jenkins_cli.sh
  - jenkins.sh - one-touch Jenkins CI: launches in docker, installs plugins, validates the Jenkinsfile, configures jobs from $PWD/setup/jenkins-job.xml and sets the Pipeline to the git remote origin's Jenkinsfile, triggers a build and tails the results in the terminal. Call from any repo top level directory with a Jenkinsfile pipeline and setup/jenkins-job.xml (all mine have it)
- concourse.sh - one-touch Concourse CI: launches in docker, configures the pipeline from $PWD/.concourse.yml, triggers a build, tails the results in the terminal and prints recent build statuses at the end. Call from any repo top level directory with a .concourse.yml config (all mine have it), mimicking the structure of fully managed CI systems
  - fly.sh - shortens the fly command to not have to specify the target all the time
- gocd.sh - one-touch GoCD CI: launches in docker, (re)creates the config repo ($PWD/setup/gocd_config_repo.json) from which to source pipeline(s) (.gocd.yml), detects and enables agent(s) to start building. Call from any repo top level directory with a .gocd.yml config (all mine have it), mimicking the structure of fully managed CI systems
- gocd_api.sh - queries the GoCD API
teamcity_*.sh - TeamCity CI API scripts:
  - teamcity.sh - boots a TeamCity CI cluster in docker - just click proceed and accept the EULA and it does the rest, it even creates an admin user and an API token for you
  - teamcity_api.sh - queries TeamCity's API, auto-handling authentication and other quirks of the API
  - teamcity_create_project.sh - creates a TeamCity project using the API
  - teamcity_create_github_oauth_connection.sh - creates a TeamCity GitHub OAuth VCS connection in the Root project, useful for bootstrapping projects from VCS configs
  - teamcity_create_vcs_root.sh - creates a TeamCity VCS root from a saved configuration (XML or JSON), as downloaded by teamcity_export_vcs_roots.sh
  - teamcity_upload_ssh_key.sh - uploads an SSH private key to a TeamCity project (for use in VCS root connections)
  - teamcity_agents.sh - lists TeamCity agents, their connected state, authorized state, whether enabled and up to date
  - teamcity_builds.sh - lists the last 100 TeamCity builds along with their state (eg. finished) and status (eg. SUCCESS / FAILURE)
  - teamcity_buildtypes.sh - lists TeamCity buildTypes (pipelines) along with their project and IDs
  - teamcity_buildtype_create.sh - creates a TeamCity buildType from a local JSON configuration (see teamcity_buildtypes_download.sh)
  - teamcity_buildtype_set_description_from_github.sh - syncs a TeamCity buildType's description from its GitHub repo description
  - teamcity_buildtypes_set_description_from_github.sh - syncs all TeamCity buildType descriptions from their GitHub repos where available
  - teamcity_export.sh - downloads TeamCity configs to local JSON files in per-project directories mimicking the native TeamCity directory structure and file naming
  - teamcity_export_project_config.sh - downloads TeamCity project configs to local JSON files
  - teamcity_export_buildtypes.sh - downloads TeamCity buildType configs to local JSON files
  - teamcity_export_vcs_roots.sh - downloads TeamCity VCS root configs to local JSON files
  - teamcity_projects.sh - lists TeamCity project IDs and Names
  - teamcity_project_set_versioned_settings.sh - configures a project to track all changes to a VCS (eg. GitHub)
  - teamcity_project_vcs_versioning.sh - quickly toggles VCS versioning on/off for a given TeamCity project (useful for testing without auto-committing)
  - teamcity_vcs_roots.sh - lists TeamCity VCS root IDs and Names
travis_*.sh - Travis CI API scripts (one of my all-time favourite CI systems):
  - travis_api.sh - queries the Travis CI API with authentication using $TRAVIS_TOKEN
  - travis_repos.sh - lists Travis CI repos
  - travis_foreach_repo.sh - executes a templated command against all Travis CI repos
  - travis_repo_build.sh - triggers a build for the given repo
  - travis_repo_caches.sh - lists caches for a given repo
  - travis_repo_crons.sh - lists crons for a given repo
  - travis_repo_env_vars.sh - lists environment variables for a given repo
  - travis_repo_settings.sh - lists settings for a given repo
  - travis_repo_create_cron.sh - creates a cron for a given repo and branch
  - travis_repo_delete_crons.sh - deletes all crons for a given repo
  - travis_repo_delete_caches.sh - deletes all caches for a given repo (sometimes clears build problems)
  - travis_delete_cron.sh - deletes a Travis CI cron by ID
  - travis_repos_settings.sh - lists settings for all repos
  - travis_repos_caches.sh - lists caches for all repos
  - travis_repos_crons.sh - lists crons for all repos
  - travis_repos_create_cron.sh - creates a cron for all repos
  - travis_repos_delete_crons.sh - deletes all crons for all repos
  - travis_repos_delete_caches.sh - deletes all caches for all repos
  - travis_lint.sh - lints a given .travis.yml using the API
buildkite_*.sh - BuildKite API scripts:
  - buildkite_pipelines.sh - lists BuildKite pipelines for your $BUILDKITE_ORGANIZATION / $BUILDKITE_USER
  - buildkite_foreach_pipeline.sh - executes a templated command for each BuildKite pipeline, replacing {user} and {pipeline} in each iteration
  - buildkite_agent.sh - runs a BuildKite agent locally on Linux or Mac, or in Docker with a choice of Linux distros
  - buildkite_agents.sh - lists the BuildKite agents connected along with their hostname, IP, start date and agent details
  - buildkite_create_pipeline.sh - creates a BuildKite pipeline from a JSON configuration (like from buildkite_get_pipeline.sh or buildkite_save_pipelines.sh)
  - buildkite_get_pipeline.sh - gets details for a specific BuildKite pipeline in JSON format
  - buildkite_pipeline_skip_settings.sh - lists the skip intermediate build settings for one or more given BuildKite pipelines
  - buildkite_pipeline_set_skip_settings.sh - configures given or all BuildKite pipelines to skip intermediate builds and cancel running builds in favour of the latest build
  - buildkite_cancel_scheduled_builds.sh - cancels BuildKite scheduled builds (to clear a backlog due to offline agents and just focus on new builds)
  - buildkite_cancel_running_builds.sh - cancels BuildKite running builds (to clear them and restart new ones later eg. after an agent / environment change / fix)
  - buildkite_rebuild_cancelled_builds.sh - triggers rebuilds of the last N cancelled builds in the current pipeline
  - buildkite_rebuild_failed_builds.sh - triggers rebuilds of the last N failed builds in the current pipeline (eg. after an agent restart / environment change / fix)
  - buildkite_rebuild_all_pipelines_last_cancelled.sh - triggers rebuilds of the last cancelled build in each pipeline in the organization
  - buildkite_rebuild_all_pipelines_last_failed.sh - triggers rebuilds of the last failed build in each pipeline in the organization
  - buildkite_retry_jobs_dead_agents.sh - triggers job retries where jobs failed due to killed agents, continuing builds from that point and replacing their false negative failed status with the real final status, slightly better than rebuilding entire jobs which happen under a new build
  - buildkite_recreate_pipeline.sh - recreates a pipeline to wipe out all stats (see url and badge caveats in --help)
  - buildkite_running_builds.sh - lists running builds and the agent they're running on
  - buildkite_save_pipelines.sh - saves all BuildKite pipelines in your $BUILDKITE_ORGANIZATION to local JSON files in $PWD/.buildkite-pipelines/
  - buildkite_set_pipeline_description.sh - sets the description of one or more pipelines using the BuildKite API
  - buildkite_set_pipeline_description_from_github.sh - sets a BuildKite pipeline description to match its source GitHub repo
  - buildkite_sync_pipeline_descriptions_from_github.sh - sets each BuildKite pipeline's description to match its source GitHub repo, for all pipelines
  - buildkite_trigger.sh - triggers a BuildKite build job for a given pipeline
  - buildkite_trigger_all.sh - same as above but for all pipelines
- appveyor_api.sh - queries AppVeyor's API with authentication
- codeship_api.sh - queries CodeShip's API with authentication
- drone_api.sh - queries Drone.io's API with authentication
- shippable_api.sh - queries Shippable's API with authentication
- wercker_app_api.sh - queries Wercker's Applications API with authentication
- atlassian_cidr_ranges.sh - lists Atlassian's IPv4 and/or IPv6 cidr ranges via its API
cloudflare_*.sh - Cloudflare API queries and reports:
  - cloudflare_api.sh - queries the Cloudflare API, handling authentication from $CLOUDFLARE_TOKEN
  - cloudflare_cidr_ranges.sh - lists Cloudflare's IPv4 and/or IPv6 cidr ranges via its API
  - cloudflare_custom_certificates.sh - lists any custom SSL certificates in a given Cloudflare zone along with their status and expiry date
  - cloudflare_dns_records.sh - lists any Cloudflare DNS records for a zone, including the type and ttl
  - cloudflare_dns_records_all_zones.sh - same as above but for all zones
  - cloudflare_dnssec.sh - lists the Cloudflare DNSSec status for all zones
  - cloudflare_firewall_rules.sh - lists Cloudflare Firewall rules, optionally with a filter expression
  - cloudflare_firewall_access_rules.sh - lists Cloudflare Firewall Access rules, optionally with a filter expression
  - cloudflare_foreach_account.sh - executes a templated command for each Cloudflare account, replacing {account_id} and {account_name} in each iteration (useful for chaining with cloudflare_api.sh)
  - cloudflare_foreach_zone.sh - executes a templated command for each Cloudflare zone, replacing {zone_id} and {zone_name} in each iteration (useful for chaining with cloudflare_api.sh, used by the adjacent cloudflare_*_all_zones.sh scripts)
  - cloudflare_purge_cache.sh - purges the entire Cloudflare cache
  - cloudflare_ssl_verified.sh - gets the Cloudflare zone SSL verification status for a given zone
  - cloudflare_ssl_verified_all_zones.sh - same as above for all zones
  - cloudflare_zones.sh - lists Cloudflare zone names and IDs (needed for writing Terraform Cloudflare code)
pingdom_*.sh - Pingdom API queries and reports for status, latency, average response times, latency averages by hour, SMS credits, outage periods and durations over the last year etc.
  - pingdom_api.sh - Solarwinds Pingdom API query script
  - pingdom_foreach_check.sh - executes a templated command against each Pingdom check, replacing {check_id} and {check_name} in each iteration
  - pingdom_checks.sh - shows all Pingdom checks, status and latencies
  - pingdom_check_outages.sh / pingdom_checks_outages.sh - show one or all Pingdom checks' outage histories for the last year
  - pingdom_checks_average_response_times.sh - shows the average response times for all Pingdom checks for the last week
  - pingdom_check_latency_by_hour.sh / pingdom_checks_latency_by_hour.sh - show the average latency for one or all Pingdom checks broken down by hour of the day, over the last week
  - pingdom_sms_credits.sh - gets the remaining number of Pingdom SMS credits
- perl_cpanm_install.sh - bulk installs CPAN modules from a mix of arguments / file lists / stdin, accounting for User vs System installs, root vs user sudo, Perlbrew / Google Cloud Shell environments, Mac vs Linux library paths and an ignore-failure option; auto-finds and reads the build failure log for quicker debugging, showing the root cause error in CI build logs etc. (see the usage sketch after this list)
- perl_cpanm_install_if_absent.sh - installs CPAN modules not already in the Perl library path (OS or CPAN installed) for faster installations where OS packages already provide some of the modules, reducing time and failure rates in CI builds
- perlpath.sh - prints all Perl library search paths, one per line
- perl_find_library_path.sh - finds the directory where a CPAN module is installed - without args finds the Perl library base
- perl_find_library_executable.sh - finds the directory where a CPAN module's CLI program is installed (system vs user, useful when it gets installed to a place that isn't in your $PATH, where which won't help)
- perl_find_unused_cpan_modules.sh - finds CPAN modules that aren't used by any programs in the current directory tree
- perl_find_duplicate_cpan_requirements.sh - finds duplicate CPAN modules listed for install more than once under the directory tree (useful for deduping module installs in a project and across submodules)
- perl_generate_fatpacks.sh - creates Fatpacks - self-contained Perl programs with all CPAN modules built in
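A hypothetical usage sketch of the CPAN helpers above - passing a mix of module names and a cpan-requirements.txt file as arguments is an assumption, see each script's --help:

```shell
# Hypothetical sketch - argument handling is an assumption:
# install modules given on the command line plus any listed in a requirements file,
# skipping modules that are already importable
perl_cpanm_install_if_absent.sh JSON::XS LWP::UserAgent cpan-requirements.txt

# find where a CPAN module and its CLI program actually landed (system vs user install)
perl_find_library_path.sh Net::DNS
perl_find_library_executable.sh App::Ack
```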
- python_compile.sh - byte-compiles Python scripts and libraries into .pyo optimized files
- python_pip_install.sh - bulk installs PyPI modules from a mix of arguments / file lists / stdin, accounting for User vs System installs, root vs user sudo, VirtualEnvs / Anaconda / GitHub Workflows / Google Cloud Shell, Mac vs Linux library paths, and an ignore-failure option
- python_pip_install_if_absent.sh - installs PyPI modules not already in the Python library path (OS or pip installed) for faster installations where OS packages already provide some of the modules, reducing time and failure rates in CI builds
- python_pip_install_for_script.sh - installs PyPI modules for the given script(s) if not already installed. Used for dynamic individual script dependency installation in the DevOps Python Tools repo (see the usage sketch after this list)
- python_pip_reinstall_all_modules.sh - reinstalls all PyPI modules, which can fix some issues
- pythonpath.sh - prints all Python library search paths, one per line
- python_find_library_path.sh - finds the directory where a PyPI module is installed - without args finds the Python library base
- python_find_library_executable.sh - finds the directory where a PyPI module's CLI program is installed (system vs user, useful when it gets installed to a place that isn't in your $PATH, where which won't help)
- python_find_unused_pip_modules.sh - finds PyPI modules that aren't used by any programs in the current directory tree
- python_find_duplicate_pip_requirements.sh - finds duplicate PyPI modules listed for install under the directory tree (useful for deduping module installs in a project and across submodules)
- python_translate_import_module.sh - converts Python import module names to PyPI module names, used by python_pip_install_for_script.sh
- python_translate_module_to_import.sh - converts PyPI module names to Python import names, used by python_pip_install_if_absent.sh and python_find_unused_pip_modules.sh
- python_pyinstaller.sh - creates PyInstaller self-contained Python programs with the Python interpreter and all PyPI modules included
- python_pypi_versions.sh - prints all available versions of a given PyPI module using the API
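A hypothetical sketch of the pip helpers above - the target script name is made up and argument handling is an assumption, see each script's --help:

```shell
# Hypothetical sketch - 'my_tool.py' is a made-up script name:
# install only the PyPI modules a given script imports and doesn't already have,
# translating import names (eg. yaml) to package names (eg. PyYAML) along the way
python_pip_install_for_script.sh my_tool.py

# install from arguments plus a requirements.txt, skipping modules already importable
python_pip_install_if_absent.sh requests boto3 requirements.txt

# locate where a module and its CLI program were installed (system vs user site-packages)
python_find_library_path.sh requests
python_find_library_executable.sh yamllint
```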
- golang_get_install.sh - bulk installs Golang modules from a mix of arguments / file lists / stdin
- golang_get_install_if_absent.sh - same as above but only if the package binary isn't already available in $PATH
- golang_rm_binaries.sh - deletes binaries of the same name adjacent to .go files. Doesn't delete your bin/ etc. as these are often real deployed applications rather than development binaries
- mp3_set_artist.sh / mp3_set_album.sh - sets the artist / album tag for all mp3 files under the given directories. Useful for grouping artists/albums and audiobook author/books (eg. for correct importing into Mac's Books.app)
- mp3_set_track_name.sh - sets the track name metadata for mp3 files under the given directories to follow their filenames. Useful for correctly displaying audiobook progress / chapters etc.
- mp3_set_track_order.sh - sets the track order metadata for mp3 files under the given directories to follow the lexical file naming order. Useful for correctly ordering album songs and audiobook chapters (eg. for Mac's Books.app). Especially useful for enforcing global ordering on multi-CD audiobooks after grouping into a single audiobook using mp3_set_album.sh (otherwise the default track numbers in each CD interleave in Mac's Books.app)

40+ Spotify API scripts (used extensively to manage my Spotify-Playlists repo):
- spotify_playlists*.sh - list playlists in either <id> <name> or JSON format
- spotify_playlist_tracks*.sh - gets playlist contents as track URIs / Artist - Track / CSV format - useful for backups or exports between music systems
- spotify_backup.sh - backs up all Spotify playlists as well as the ordered list of playlists
- spotify_backup_playlist*.sh - backs up Spotify playlists to local files in both human readable Artist - Track format and Spotify URI format for easy restores or adding to new playlists
- spotify_search*.sh - searches Spotify's library for tracks / albums / artists, getting results in human readable, JSON or URI formats for easy loading to Spotify playlists
- spotify_release_year.sh - searches for a given track or album and finds the original release year
- spotify_uri_to_name.sh - converts Spotify track / album / artist URIs to human readable Artist - Track / CSV format. Takes Spotify URIs, URL links or just IDs. Reads URIs from files or standard input
- spotify_create_playlist.sh - creates a Spotify playlist, either public or private
- spotify_rename_playlist.sh - renames a Spotify playlist
- spotify_set_playlists_public.sh / spotify_set_playlists_private.sh - sets one or more given Spotify playlists to public / private
- spotify_add_to_playlist.sh - adds tracks to a given playlist. Takes a playlist name or ID and Spotify URIs in any form from files or standard input. Can be combined with many other tools listed here which output Spotify URIs, or appended from other playlists. Can also be used to restore a Spotify playlist from backups
- spotify_delete_from_playlist.sh - deletes tracks from a given playlist. Takes a playlist name or ID and Spotify URIs in any form from files or standard input, optionally prefixed with a track position to remove only specific occurrences (useful for removing duplicates from playlists)
- spotify_delete_from_playlist_if_in_other_playlists.sh - deletes tracks from a given playlist if their URIs are found in the subsequently given playlists
- spotify_duplicate_uri_in_playlist.sh - finds duplicate Spotify URIs in a given playlist (these are guaranteed exact duplicate matches), returns all but the first occurrence and optionally their track positions (zero-indexed to align with the Spotify API for easy chaining with other tools)
- spotify_duplicate_tracks_in_playlist.sh - finds duplicate Spotify tracks in a given playlist (these are identical Artist - Track name matches, which may be from different albums / singles)
- spotify_delete_duplicates_in_playlist.sh - deletes duplicate Spotify URI tracks (identical) in a given playlist using spotify_duplicate_uri_in_playlist.sh and spotify_delete_from_playlist.sh
- spotify_delete_duplicate_tracks_in_playlist.sh - deletes duplicate Spotify tracks (name matched) in a given playlist using spotify_duplicate_tracks_in_playlist.sh and spotify_delete_from_playlist.sh
- spotify_delete_any_duplicates_in_playlist.sh - calls both of the above scripts to first get rid of duplicate URIs and then remove any other duplicates by track name matches
- spotify_playlist_tracks_uri_in_year.sh - finds track URIs in a playlist where their original release date is in a given year or decade (by regex match)
- spotify_playlist_uri_offset.sh - finds the offset of a given track URI in a given playlist, useful to find positions to resume processing a large playlist
- spotify_top_artists*.sh - lists your top artists in URI or human readable format
- spotify_top_tracks*.sh - lists top tracks in URI or human readable format
- spotify_liked_tracks*.sh - lists your Liked Songs in URI or human readable formats
- spotify_liked_artists*.sh - lists artists from Liked Songs in URI or human readable formats
- spotify_artists_followed*.sh - lists all followed artists in URI or human readable formats
- spotify_follow_artists.sh - follows artists for the given URIs from files or standard input
- spotify_follow_liked_artists.sh - follows artists with N or more tracks in your Liked Songs
- spotify_set_tracks_uri_to_liked.sh - sets a list of Spotify track URIs to 'Liked' so they appear in the Liked Songs playlist. Useful for marking all the tracks in your best playlists as favourite tracks, or for porting historical Starred tracks to the newer Liked Songs
- spotify_foreach_playlist.sh - executes a templated command against all playlists, replacing {playlist} and {playlist_id} in each iteration (see the chaining sketch after this list)
- spotify_playlist_name_to_id.sh / spotify_playlist_id_to_name.sh - convert playlist names <=> IDs
- spotify_api_token.sh - gets a Spotify authentication token using either the Client Credentials or Authorization Code authentication flows, the latter being able to read/modify private user data; automatically used by spotify_api.sh
- spotify_api.sh - queries any Spotify API endpoint with authentication, used by the adjacent Spotify scripts
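A hypothetical chaining sketch tying a few of the Spotify scripts together - the playlist name is made up, the exact arguments are assumptions, and a plain spotify_playlist_tracks.sh is assumed to exist behind the spotify_playlist_tracks*.sh glob; see each script's --help:

```shell
# Hypothetical sketch - playlist name and argument handling are assumptions:
# back up all playlists, then remove exact-URI and name-matched duplicates from one of them
spotify_backup.sh
spotify_delete_any_duplicates_in_playlist.sh "Driving Mix"

# run a templated command per playlist, eg. count the tracks in each
spotify_foreach_playlist.sh 'printf "%s: " "{playlist}" && spotify_playlist_tracks.sh "{playlist_id}" | wc -l'
```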
- install_packages.sh - installs package lists from arguments, files or stdin on major Linux distros and Mac, detecting the package manager and invoking the right install commands, with sudo if not root. Works on RHEL / CentOS / Fedora, Debian / Ubuntu, Alpine, and Mac Homebrew. Leverages and supports all features of the distro / OS specific install scripts listed below (see the usage sketch after this list)
- install_packages_if_absent.sh - installs package lists if they're not already installed, saving time and minimizing install logs / CI logs, same support list as above
- yum_install_packages.sh / yum_remove_packages.sh - installs RPM lists from arguments, files or stdin. Handles Yum + Dnf behavioural differences, calls sudo if not root, auto-attempts variations of python/python2/python3 package names. Avoids yum slowness by checking if an RPM is installed before attempting to install it, accepts the NO_FAIL=1 env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across RHEL/CentOS/Fedora versions)
- yum_install_packages_if_absent.sh - installs RPMs only if not already installed and not a metapackage provided by other packages (eg. the vim metapackage provided by vim-enhanced), saving time and minimizing install logs / CI logs, plus all the features of yum_install_packages.sh above
- rpms_filter_installed.sh / rpms_filter_not_installed.sh - pipe filter packages that are / are not installed for easy script piping
- apt_install_packages.sh / apt_remove_packages.sh - installs Deb package lists from arguments, files or stdin. Auto calls sudo if not root, accepts the NO_FAIL=1 env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across Debian/Ubuntu distros/versions)
- apt_install_packages_if_absent.sh - installs Deb packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of apt_install_packages.sh above
- apt_wait.sh - blocking wait on concurrent apt locks to avoid failures and continue when available, mimicking yum's waiting behaviour rather than erroring out
- debs_filter_installed.sh / debs_filter_not_installed.sh - pipe filter packages that are / are not installed for easy script piping
- apk_install_packages.sh / apk_remove_packages.sh - installs Alpine apk package lists from arguments, files or stdin. Auto calls sudo if not root, accepts the NO_FAIL=1 env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across Alpine versions)
- apk_install_packages_if_absent.sh - installs Alpine apk packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of apk_install_packages.sh above
- apk_filter_installed.sh / apk_filter_not_installed.sh - pipe filter packages that are / are not installed for easy script piping
- brew_install_packages.sh / brew_remove_packages.sh - installs Mac Homebrew package lists from arguments, files or stdin. Accepts the NO_FAIL=1 env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across versions)
- brew_install_packages_if_absent.sh - installs Mac Homebrew packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of brew_install_packages.sh above
- brew_filter_installed.sh / brew_filter_not_installed.sh - pipe filter packages that are / are not installed for easy script piping

Run make system-packages before make pip / make cpan to shorten how many packages need installing, reducing the chances of build failures.
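A hypothetical sketch of the package install helpers above - the package list filename is made up and the behaviour is only as summarized in the descriptions, see each script's --help:

```shell
# Hypothetical sketch - 'my-packages.txt' is a made-up package list file:
# portable install across RHEL/Debian/Alpine/Mac - only installs what's missing
install_packages_if_absent.sh curl jq my-packages.txt

# tolerate package names that differ or don't exist on this distro version
NO_FAIL=1 apt_install_packages.sh libssl-dev libssl1.1

# pipe-filter a package list down to what's still missing, then install just those
cat my-packages.txt | rpms_filter_not_installed.sh | xargs yum_install_packages.sh
```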
- check_*.sh - extensive collection of generalized tests - these run against all my GitHub repos via CI. Some examples:
  - Programming language linting
  - Build System, Docker & CI linting
- csv_header_indices.sh - lists CSV headers with their zero-indexed numbers, a useful reference when coding against column positions
- Data format validation via validate_*.py from the DevOps Python Tools repo
- json2yaml.sh - converts JSON to YAML
- yaml2json.sh - converts YAML to JSON - needed for some APIs like GitLab CI linting (see the GitLab section above; usage sketch below)
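A hypothetical usage sketch for the JSON/YAML converters - whether they take filename arguments, stdin or both is an assumption:

```shell
# Hypothetical sketch - argument/stdin handling is an assumption:
json2yaml.sh config.json > config.yaml
cat .gitlab-ci.yml | yaml2json.sh   # eg. to feed an API that only accepts JSON, such as GitLab's CI lint
```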
DevOps Python Tools - 80+ DevOps CLI tools for AWS, GCP, Hadoop, HBase, Spark, Log Anonymizer, Ambari Blueprints, AWS CloudFormation, Linux, Docker, Spark Data Converters & Validators (Avro / Parquet / JSON / CSV / INI / XML / YAML), Elasticsearch, Solr, Travis CI, Pig, IPython
SQL Scripts - 100+ SQL Scripts - PostgreSQL, MySQL, AWS Athena, Google BigQuery
Templates - dozens of Code & Config templates - AWS, GCP, Docker, Jenkins, Terraform, Vagrant, Puppet, Python, Bash, Go, Perl, Java, Scala, Groovy, Maven, SBT, Gradle, Make, GitHub Actions Workflows, CircleCI, Jenkinsfile, Makefile, Dockerfile, docker-compose.yml, M4 etc.
Kubernetes configs - Kubernetes YAML configs - Best Practices, Tips & Tricks are baked right into the templates for future deployments
The Advanced Nagios Plugins Collection - 450+ programs for Nagios monitoring your Hadoop & NoSQL clusters. Covers every Hadoop vendor's management API and every major NoSQL technology (HBase, Cassandra, MongoDB, Elasticsearch, Solr, Riak, Redis etc.) as well as message queues (Kafka, RabbitMQ), continuous integration (Jenkins, Travis CI) and traditional infrastructure (SSL, Whois, DNS, Linux)
DevOps Perl Tools - 25+ DevOps CLI tools for Hadoop, HDFS, Hive, Solr/SolrCloud CLI, Log Anonymizer, Nginx stats & HTTP(S) URL watchers for load balanced web farms, Dockerfiles & SQL ReCaser (MySQL, PostgreSQL, AWS Redshift, Snowflake, Apache Drill, Hive, Impala, Cassandra CQL, Microsoft SQL Server, Oracle, Couchbase N1QL, Dockerfiles, Pig Latin, Neo4j, InfluxDB), Ambari FreeIPA Kerberos, Datameer, Linux...
HAProxy Configs - 80+ HAProxy Configs for Hadoop, Big Data, NoSQL, Docker, Elasticsearch, SolrCloud, HBase, Cloudera, Hortonworks, MapR, MySQL, PostgreSQL, Apache Drill, Hive, Presto, Impala, ZooKeeper, OpenTSDB, InfluxDB, Prometheus, Kibana, Graphite, SSH, RabbitMQ, Redis, Riak, Rancher etc.
Dockerfiles - 50+ DockerHub public images for Docker & Kubernetes - Hadoop, Kafka, ZooKeeper, HBase, Cassandra, Solr, SolrCloud, Presto, Apache Drill, Nifi, Spark, Mesos, Consul, Riak, OpenTSDB, Jython, Advanced Nagios Plugins & DevOps Tools repos on Alpine, CentOS, Debian, Fedora, Ubuntu, Superset, H2O, Serf, Alluxio / Tachyon, FakeS3
Perl Lib - Perl utility library
PyLib - Python utility library
Lib-Java - Java utility library
Nagios Plugin Kafka - Kafka Nagios Plugin written in Scala with Kerberos support
Pre-built Docker images are available for those repos (which include this one as a submodule), and the "docker available" icon above links to an uber image which contains all my GitHub repos pre-built. There are CentOS, Alpine, Debian and Ubuntu versions of this uber Docker image containing all repos.
Optional, only if you don't do the full make install.
Install only OS system package dependencies and AWS CLI via Python Pip (doesn't symlink anything to $HOME):
make
Adds sourcing to .bashrc and .bash_profile and symlinks dot config files to $HOME (doesn't install OS system package dependencies):
make link
Undo via:
make unlink
Install only OS system package dependencies (doesn't include AWS CLI or Python packages):
make system-packages
Install AWS CLI:
make aws
Install Azure CLI:
make azure
Install GCP GCloud SDK (includes CLI):
make gcp
Install GCP GCloud Shell environment (sets up persistent OS packages and all home directory configs):
make gcp-shell
Install generically useful Python CLI tools and modules (includes AWS CLI, autopep8 etc):
make python
> make help
Usage:
Common Options:
make help show this message
make build installs all dependencies - OS packages and any language libraries via native tools eg. pip, cpanm, gem, go etc that are not available via OS packages
make build-retry retries 'make build' x 3 until success to try to mitigate temporary upstream repo failures triggering false alerts in CI systems
make ci prints env, then runs 'build-retry' for more resilient CI builds with debugging
make printenv prints environment variables, CPU cores, OS release, $PWD, Git branch, hashref etc. Useful for CI debugging
make system-packages installs OS packages only (detects OS via whichever package manager is available)
make test run tests
make clean removes compiled / generated files, downloaded tarballs, temporary files etc.
make submodules initialize and update submodules to the right release (done automatically by build / system-packages)
make init same as above, often useful to do in CI systems to get access to additional submodule provided targets such as 'make ci'
make cpan install any modules listed in any cpan-requirements.txt files if not already installed
make pip install any modules listed in any requirements.txt files if not already installed
make python-compile compile any python files found in the current directory and 1 level of subdirectory
make pycompile
make github open browser at github project
make readme open browser at github's README
make github-url print github url and copy to clipboard
make status open browser at Github CI Builds overview Status page for all projects
make ls print list of code files in project
make wc show counts of files and lines
Repo specific options:
make install builds all script dependencies, installs AWS CLI, symlinks all config files to $HOME and adds sourcing of bash profile
make link symlinks all config files to $HOME and adds sourcing of bash profile
make unlink removes all symlinks pointing to this repo's config files and removes the sourcing lines from .bashrc and .bash_profile
make python-desktop installs all Python Pip packages for desktop workstation listed in setup/pip-packages-desktop.txt
make perl-desktop installs all Perl CPAN packages for desktop workstation listed in setup/cpan-packages-desktop.txt
make ruby-desktop installs all Ruby Gem packages for desktop workstation listed in setup/gem-packages-desktop.txt
make golang-desktop installs all Golang packages for desktop workstation listed in setup/go-packages-desktop.txt
make nodejs-desktop installs all NodeJS packages for desktop workstation listed in setup/npm-packages-desktop.txt
make desktop installs all of the above + many desktop OS packages listed in setup/
make mac-desktop all of the above + installs a bunch of major common workstation software packages like Ansible, Terraform, MiniKube, MiniShift, SDKman, Travis CI, CCMenu, Parquet tools etc.
make linux-desktop
make ls-scripts print list of scripts in this project, ignoring code libraries in lib/ and .bash.d/
make kubernetes installs kubectl and kustomize to ~/bin/
make terraform installs major terraform versions to ~/bin/ (useful during upgrades or switching between environments)
make vim installs Vundle and plugins
make tmux installs TMUX TPM and plugin for kubernetes context
make ccmenu installs and (re)configures CCMenu to watch this and all other major HariSekhon GitHub repos
make status open the Github Status page of all my repos build statuses across all CI platforms
make aws installs AWS CLI tools
make azure installs Azure CLI
make gcp installs Google Cloud SDK
make aws-shell sets up AWS Cloud Shell: installs core packages and links configs
(maintains itself across future Cloud Shells via .aws_customize_environment hook)
make gcp-shell sets up GCP Cloud Shell: installs core packages and links configs
(maintains itself across future Cloud Shells via .customize_environment hook)
make azure-shell sets up Azure Cloud Shell (limited compared to gcp-shell, doesn't install OS packages since there is no sudo)
Now exiting usage help with status code 3 to explicitly prevent silent build failures from stray 'help' arguments
make: *** [help] Error 3
(make help exits with error code 3, like most of my programs, to differentiate from build success and ensure a stray help argument doesn't cause a silent build failure with exit code 0)