After patching a CentOS 7 server with the latest RPMs, ssh would no longer authenticate against Active Directory.
To debug:
1. Become root on the Unix server.
sudo su -
2. Stop the sshd service (Note: this will not kill your current session)
systemctl stop sshd
3. Start sshd in debug mode. The debug output will print on the terminal
/sbin/sshd -d
4. From another terminal, ssh into the server
ssh username@server
5. The sshd debug messages showed that the username could not be authenticated against AD. The first place to look is sssd (System Security Services Daemon)
6. Restarted sssd. This returned an error stating that sssd failed to start
systemctl restart sssd
7. Viewed the sssd service status first. The logs did not provide much debug info
systemctl -l status sssd
8. Start sssd in debug mode. The debug output will print on the terminal
sssd -i -d 4
9. The error message in this case was "PAM unable to dlopen /usr/lib/samba/libreplace-samba4.so: version 'SAMBA_4.4.4' not found"
10. Checked the installed versions of samba-client. This showed that yum update had installed both samba-client 4.4.4 and 4.6.2
yum --showduplicates list samba-client
11. Reinstalled samba-client so that only one version is present
yum remove samba-client
yum install samba-client
12. sssd now starts successfully and users can authenticate with AD over ssh
systemctl start sssd
systemctl start sshd
2017-11-28
2017-07-30
My quick Boston DevOps talk
I gave a quick talk at the Boston DevOps MeetUp on July 26, 2017 about how my Ansible installer side project became the official product installer at HealthEdge.
https://www.youtube.com/watch?v=Q7sDcU4--l8&t
Forcing Oracle to drop user
If there are sessions currently connected to an Oracle user schema, then you cannot drop the user. For example, if the Oracle user 'mytest' is currently logged in, then
DROP USER mytest CASCADE;
ORA-01940: cannot drop a user that is currently connected
If you cannot get the Oracle user to voluntarily log off, then as the Oracle admin, you will have to kill the Oracle user session.
There are two ways to kill an Oracle session
ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
ALTER SYSTEM DISCONNECT SESSION 'sid,serial#' IMMEDIATE;
With "kill session", Oracle will request the session to end. With "disconnect session", Oracle will terminate the server process associated with the session.
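The sid and serial# values come from the v$session view (SELECT sid, serial# FROM v$session WHERE username = 'MYTEST';). As a sketch, a shell helper (the function name is mine, not part of Oracle) that composes the statement for one session:

```shell
# Hypothetical helper: compose the ALTER SYSTEM statement for one session.
# sid and serial# come from:
#   SELECT sid, serial# FROM v$session WHERE username = 'MYTEST';
kill_stmt() {
  printf "ALTER SYSTEM DISCONNECT SESSION '%s,%s' IMMEDIATE;" "$1" "$2"
}

# e.g. as the Oracle admin:
#   kill_stmt 123 456 | sqlplus -s / as sysdba
```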
The Oracle script at https://github.com/juttayaya/oracle/blob/master/drop-user/drop_user_force.sql will both kill and disconnect all Oracle sessions for a user then drop it. Example of usage:
sqlplus sys as sysdba @drop_user_force.sql mytest
If you are using AWS Oracle RDS, then use the equivalent script https://github.com/juttayaya/oracle/blob/master/drop-user/drop_user_force_aws_rds.sql . Example of usage:
sqlplus sys as sysdba@aws-rds-hostname:1521/ORCL @drop_user_force_aws_rds.sql mytest
2017-07-22
Ansible ssh connection closed error
Problem:
Ansible returns the error "Failed to connect to the host via ssh: Shared connection to host closed"
Solution:
Modify ansible.cfg to send an SSH keep-alive ping every 2 minutes.
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=120
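For reference, this option lives in the [ssh_connection] section of ansible.cfg; a minimal sketch (the file can sit in the playbook directory, ~/.ansible.cfg, or /etc/ansible/ansible.cfg):

```ini
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=120
```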
2017-07-09
AWS terraform: Use DynamoDB locking
Please see the previous post on how to set up terraform to use a remote AWS S3 bucket to store the terraform.tfstate file (http://www.javajirawat.com/2017/07/aws-terraform-use-s3-remote-tfstate-file.html) . This example continues from the S3 bucket article.
A remote terraform state file allows multiple terraform servers to manage the same resources. However, if two people modify the same terraform state at the same time, it can lead to corruption and errors. Terraform can be configured to use AWS DynamoDB to lock the state file and prevent concurrent edits.
AWS DynamoDB is a managed cloud NoSQL key-value database.
Setup:
Create an AWS DynamoDB table with terraform to lock the terraform.tfstate.
1.) Create terraform main.tf for AWS DynamoDB
See https://github.com/juttayaya/devops/blob/master/hashicorp/terraform/s3-tfstate-example/dynamodb/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_dynamodb_table" "dynamodb-terraform-lock-example" {
  name           = "terraform-lock-example"
  hash_key       = "LockID"
  read_capacity  = 5
  write_capacity = 5

  attribute {
    name = "LockID"
    type = "S"
  }

  tags {
    Name = "Terraform Lock Table Example"
    Org  = "JavaJirawat"
  }
}
You can name the DynamoDB table anything you wish. The hash_key must be a String attribute named LockID.
2.) Execute main.tf to create the DynamoDB table on AWS
Run the command
terraform apply
The AWS account that executes terraform needs AmazonDynamoDBFullAccess permission in the region you are creating the database table
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
Usage:
Here is an example of using the DynamoDB table we just created to lock a terraform.tfstate for an AWS EC2 resource.
1.) Create terraform main.tf for an AWS EC2 server with an S3 backend to store the terraform.tfstate file and a DynamoDB table to lock it.
terraform {
  backend "s3" {
    bucket         = "terraform-s3-tfstate-example"
    region         = "us-east-1"
    key            = "example/ec2-with-locking/terraform.tfstate"
    dynamodb_table = "terraform-lock-example"
    encrypt        = true
  }
}

provider "aws" {
  region = "us-east-1"
}

# Amazon Linux AMI
resource "aws_instance" "ec2-with-locking-example" {
  count         = 1
  ami           = "ami-a4c7edb2"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "Example for DynamoDB lock"
    Org  = "JavaJirawat"
  }
}
The dynamodb_table value must match the name of the DynamoDB table we created.
2.) Initialize the terraform S3 backend
Run the command
terraform init
Type in "yes" for any prompt.
3.) Execute main.tf to create the EC2 server on AWS
Run the command
terraform apply
The AWS account that executes terraform needs AmazonEC2FullAccess permission in the region you are creating the EC2 server
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonEC2FullAccess
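If a terraform run crashes mid-apply, it can leave a stale lock entry behind; terraform prints the lock ID in its error, and terraform force-unlock releases it. You can also inspect the lock table directly with the aws CLI. A sketch (the helper name is mine; the table name is from the example above):

```shell
# Hypothetical helper: compose the aws-cli command that dumps the current
# lock entries from the DynamoDB lock table (requires the aws CLI and
# credentials when actually run).
lock_scan_cmd() {
  printf 'aws dynamodb scan --table-name %s' "$1"
}

# e.g. run:  $(lock_scan_cmd terraform-lock-example)
# A stale lock can then be cleared with: terraform force-unlock <LOCK_ID>
```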
AWS Terraform: Use S3 remote tfstate file
Terraform uses a text file called terraform.tfstate to store the state of the infrastructure it manages (https://www.terraform.io/docs/state/). If multiple terraform servers manage the same resources, this file needs to be remotely accessible. To prevent accidental deletion or corruption, terraform.tfstate should be versioned.
Amazon S3 (Simple Storage Service) fulfills the above requirements. S3 is a cloud file storage service; basically the AWS version of Dropbox.
Setup:
Create an AWS S3 bucket with terraform to store terraform.tfstate.
1.) Create terraform main.tf for AWS S3 bucket.
See https://github.com/juttayaya/devops/blob/master/hashicorp/terraform/s3-tfstate-example/s3/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "s3-tfstate-example" {
  bucket = "terraform-s3-tfstate-example"
  acl    = "private"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }

  tags {
    Name = "Terraform S3 tfstate Example"
    Org  = "JavaJirawat"
  }
}
The above configuration turns on S3 versioning so you can query for the history of infrastructure changes. The prevent_destroy = true guards against accidental deletion.
2.) Execute main.tf to create the S3 bucket on AWS
Run the command
terraform apply
The AWS account that executes terraform needs AmazonS3FullAccess permission in the region you are creating the S3 bucket
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess
Usage:
Here is an example of using the S3 bucket we just created to store a terraform.tfstate for an AWS EC2 resource.
1.) Create terraform main.tf for an AWS EC2 server with an S3 backend to store the terraform.tfstate file.
See https://github.com/juttayaya/devops/blob/master/hashicorp/terraform/s3-tfstate-example/ec2/main.tf
terraform {
  backend "s3" {
    bucket  = "terraform-s3-tfstate-example"
    region  = "us-east-1"
    key     = "example/ec2/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  region = "us-east-1"
}

# Amazon Linux AMI
resource "aws_instance" "ec2-example" {
  count         = 1
  ami           = "ami-a4c7edb2"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "Example for S3 tfstate"
    Org  = "JavaJirawat"
  }
}
The terraform backend bucket name and region must match the S3 bucket name and region we created. The key is the full folder path and filename for the terraform.tfstate file.
2.) Initialize the terraform S3 backend
Run the command
terraform init
Type in "yes" for any prompt.
3.) Execute main.tf to create the EC2 server on AWS
Run the command
terraform apply
The AWS account that executes terraform needs AmazonEC2FullAccess permission in the region you are creating the EC2 server
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonEC2FullAccess
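Because the bucket has versioning enabled, every terraform apply leaves a retrievable copy of the previous state. You can list those copies with the aws CLI; a grep-based sketch for pulling out the version IDs (the helper name is mine; for real work the aws CLI's --query option is the better tool):

```shell
# Hypothetical helper: pull the version IDs out of the JSON that
# `aws s3api list-object-versions` prints (crude grep-based sketch).
version_ids() {
  grep -o '"VersionId": "[^"]*"' | cut -d'"' -f4
}

# e.g.
#   aws s3api list-object-versions \
#     --bucket terraform-s3-tfstate-example \
#     --prefix example/ec2/terraform.tfstate | version_ids
```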