it is actually possible to add non-root iam users using your root account. however, in order to do that, you first have to create an access key for your root user, which aws does not recommend.

the method is to manually create a non-root user, grant it an initial iam permission that lets it modify iam resources, and automate the aggregation of the additional iam permissions via a script in .gitlab-ci.yml
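
a minimal sketch of that bootstrap step with the aws cli (user and group names follow the readme below; run this once with the root access key, then deactivate that key):

# create the non-root user and the group that will hold its policies
aws iam create-user --user-name pulumi-user
aws iam create-group --group-name pulumi-group
aws iam add-user-to-group --user-name pulumi-user --group-name pulumi-group

# the single manually-attached policy; the ci script aggregates the rest later
aws iam attach-group-policy \
  --group-name pulumi-group \
  --policy-arn arn:aws:iam::aws:policy/IAMFullAccess

# credentials for the new user (store these as ci variables, never in git)
aws iam create-access-key --user-name pulumi-user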

joegarciar3a3294@lemmy.world | 3 points | 4 days ago

it's just for my portfolio. it's like self-hosting for enterprise

 

i am not a devops engineer. i appreciate any critique or correction.

code: gitlab github

Deploying Nextcloud on AWS ECS with Pulumi

This Pulumi program deploys a highly available, cost-effective Nextcloud service on AWS Fargate with a serverless Aurora PostgreSQL database.

Deployment Option 1 (GitOps)

The first few items are high-level instructions only; follow the detailed steps on the hyperlinked web pages, which include the best practices recommended by their authors.

  1. Create a Pulumi account. This is for creating the Personal Access Token that is required when provisioning the AWS resources.
  2. Create a non-root AWS IAM User called pulumi-user.
  3. Create an IAM User Group called pulumi-group.
  4. Add the pulumi-user to the pulumi-group User Group.
  5. Attach the IAMFullAccess policy to pulumi-group. IAMFullAccess lets your IAM User attach the remaining required IAM policies to the User Group via the automation script later.
  6. Create an access key for your non-root IAM User.
  7. On your Pulumi account, go to Personal access tokens and create a token.
  8. Also create a password for the Aurora Database. You can use a password generator.
  9. Clone this repository to your GitLab or GitHub account.
  10. This works on either GitLab CI/CD or GitHub Actions. On GitLab, go to the cloned repository's Settings > CI/CD > Variables. On GitHub, go to the cloned repository's Settings > Secrets and variables > Actions > Secrets.
  11. Store the credentials from steps 6-8 as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, PULUMI_ACCESS_TOKEN, and POSTGRES_PASSWORD. These will be used as environment variables by the deployment script (a terminal sketch for steps 8 and 11 follows this list).
  12. After deployment, go to EC2 > Load Balancers on the AWS Console. The load balancer's DNS name is where you access the Nextcloud web interface to set up your administrative credentials.
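
If you prefer the terminal to the web UI, steps 8 and 11 can also be done as below (a sketch; the GitLab token and project id are placeholders for you to fill in):

# step 8: generate a password for the Aurora database
openssl rand -base64 24

# step 11: store a CI/CD variable through the GitLab API instead of the web UI
curl --request POST \
  --header "PRIVATE-TOKEN: <your-gitlab-token>" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables" \
  --form "key=POSTGRES_PASSWORD" --form "value=<generated-password>" \
  --form "masked=true"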

[!NOTE] Automatic deployment is triggered when a git push changes main.go, .gitlab-ci.yml, or ci.yml. In main.go you can adjust the specifications of the resources to be provisioned; notable ones are on lines 327, 328, 571, 572, 602, 603, and 640.
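
For orientation, a minimal sketch of what such a change-triggered job can look like (illustrative only; the actual .gitlab-ci.yml in the repository is authoritative, and the stack name is an assumption):

deploy:
  image: pulumi/pulumi-go   # official Pulumi image with the Go toolchain
  rules:
    - changes:              # run only when these files change
        - main.go
        - .gitlab-ci.yml
  script:
    # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, PULUMI_ACCESS_TOKEN and
    # POSTGRES_PASSWORD are injected from the CI/CD variables set in step 11
    - pulumi up --yes --non-interactive --stack dev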

Deployment Option 2 (Manual)

  1. Install Go, AWS CLI, and Pulumi.
  2. Follow steps 1-8 above.
  3. Add the required IAM policies to the IAM User Group to allow Pulumi to interact with AWS resources:
printf '%s\n' \
  "arn:aws:iam::aws:policy/AmazonS3FullAccess" \
  "arn:aws:iam::aws:policy/AmazonECS_FullAccess" \
  "arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess" \
  "arn:aws:iam::aws:policy/CloudWatchEventsFullAccess" \
  "arn:aws:iam::aws:policy/AmazonEC2FullAccess" \
  "arn:aws:iam::aws:policy/AmazonVPCFullAccess" \
  "arn:aws:iam::aws:policy/SecretsManagerReadWrite" \
  "arn:aws:iam::aws:policy/AmazonElasticFileSystemFullAccess" \
  "arn:aws:iam::aws:policy/AmazonRDSFullAccess" \
  | xargs -I {} aws iam attach-group-policy --group-name pulumi-group --policy-arn {}
  4. Add the environment variables.
export PULUMI_ACCESS_TOKEN="value"
export AWS_ACCESS_KEY_ID="value"
export AWS_SECRET_ACCESS_KEY="value"
export POSTGRES_PASSWORD="value"
  5. Clone the repository locally and deploy.
mkdir pulumi-aws && \
cd pulumi-aws && \
pulumi new aws-go && \
rm * && \
git clone https://gitlab.com/joevizcara/pulumi-aws.git . && \
pulumi up

Deprovisioning

pulumi destroy --yes

Local Testing

The Pulumi.aws-go-dev.yaml file contains a config block for use with LocalStack for local testing.
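
That block looks roughly like the following sketch (the service list, dummy credentials, and LocalStack's default edge port are assumptions; the committed Pulumi.aws-go-dev.yaml is authoritative):

config:
  aws:region: us-east-1
  aws:accessKey: test                 # LocalStack accepts dummy credentials
  aws:secretKey: test
  aws:skipCredentialsValidation: true
  aws:skipRequestingAccountId: true
  aws:s3UsePathStyle: true
  aws:endpoints:                      # point the AWS provider at LocalStack
    - ec2: http://localhost:4566
      ecs: http://localhost:4566
      rds: http://localhost:4566
      s3: http://localhost:4566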

Features

  1. Subscription-free application - Nextcloud is a free and open-source cloud storage and file-sharing platform.
  2. Serverless management - Fargate and Aurora Serverless reduce the infrastructure you have to manage yourself.
  3. Reduced cost - can be scaled and made as highly available as an AWS EKS cluster, but at a lower per-hour cost.
  4. Written in Go - a popular language for cloud-native applications, so the code presents little syntax barrier to engineers in the space (see the sketch below).
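
For a taste of point 4, here is a trimmed sketch of what the database piece of such a program looks like in Go (resource names and capacity values are illustrative; main.go in the repository is authoritative):

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/rds"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Aurora Serverless v2 PostgreSQL cluster; the real main.go also wires
		// up the VPC, ECS/Fargate service, EFS, and the load balancer.
		_, err := rds.NewCluster(ctx, "nextcloud-db", &rds.ClusterArgs{
			Engine:         pulumi.String("aurora-postgresql"),
			EngineMode:     pulumi.String("provisioned"), // "provisioned" + v2 scaling = serverless
			DatabaseName:   pulumi.String("nextcloud"),
			MasterUsername: pulumi.String("nextcloud"),
			// supplied via the POSTGRES_PASSWORD variable in the real program
			MasterPassword: pulumi.String("changeme"),
			Serverlessv2ScalingConfiguration: &rds.ClusterServerlessv2ScalingConfigurationArgs{
				MinCapacity: pulumi.Float64(0.5),
				MaxCapacity: pulumi.Float64(2),
			},
		})
		return err
	})
}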

Diagram

joegarciar3a3294@lemmy.world | 0 points | 6 days ago

i agree. they should be run in a confined env first

 

i am not a devops engineer. i appreciate any critique or correction.

code: gitlab github

Managing Proxmox VE via Terraform and GitOps

This program enables a declarative, IaC method of provisioning multiple resources in a Proxmox Virtual Environment.

Deployment

  1. Clone this GitLab/GitHub repository.

  2. Go to the GitLab Project/Repository > Settings > CI/CD > Runners > Create project runner, tick Run untagged jobs, and click Create runner.

  3. Under Step 1, copy the runner authentication token, store it somewhere, and click View runners.

  4. On the PVE Web UI, right-click on the target Proxmox node and click Shell.

  5. Execute this command in the PVE shell.

bash <(curl -s https://gitlab.com/joevizcara/terraform-proxmox/-/raw/master/prep.sh)

[!CAUTION] Examine the content of this shell script before executing it; you can also run it on a virtualized Proxmox VE first to observe what it does. It creates a privileged PAM user that authenticates via an API token, and a small LXC environment for GitLab Runner to manage the Proxmox resources. Because of API limitations between the Terraform provider and PVE, it has to add the SSH public key from the LXC to the authorized keys of the PVE node in order to write the cloud-init configuration YAML files to the local Snippets datastore. It also enables a few more content types on the local datastore (e.g. Snippets, Import). Consider enabling two-factor authentication on GitLab if this is to be applied to a real environment.
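
In rough strokes, the PVE-side preparation amounts to something like this sketch (user, token, vmid, and template names are illustrative assumptions; the actual prep.sh is authoritative):

# privileged PAM user plus an API token for the Terraform provider
pveum user add terraform@pam
pveum aclmod / -user terraform@pam -role Administrator
pveum user token add terraform@pam tf-token --privsep 0

# let the local datastore hold cloud-init snippets as well
pvesm set local --content iso,vztmpl,backup,snippets

# a small LXC for the GitLab Runner
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname gitlab-runner --memory 1024 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp --start 1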

  6. Go to GitLab Project/Repository > Settings > CI/CD > Variables > Add variable:

Key: PM_API_TOKEN_SECRET
Value: the token secret value from credentials.txt

  7. If this repository is cloned locally, adjust the values in the .tf files to match the PVE environment onto which this will be deployed.

[!NOTE] For reference, the Terraform provider used is bpg/proxmox from the registry. A git push will trigger the GitLab Runner and apply the infrastructure changes.
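
A sketch of the matching provider configuration (the endpoint, user, and token id are placeholder assumptions; the repository's .tf files are authoritative):

variable "pm_api_token_secret" {
  sensitive = true # fed from the PM_API_TOKEN_SECRET CI/CD variable
}

terraform {
  required_providers {
    proxmox = {
      source = "bpg/proxmox"
    }
  }
}

provider "proxmox" {
  endpoint  = "https://192.168.1.10:8006/" # your PVE node
  api_token = "terraform@pam!tf-token=${var.pm_api_token_secret}"
  ssh {
    agent = true # the provider SSHes into the node to upload snippets
  }
}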

  8. If the first job stage succeeds, go to GitLab Project/Repository > Build > Jobs and click the Run ▶️ button on the apply infra job.

  9. If the second job stage succeeds, go to the PVE Web UI to start the new VMs for testing or configuration.

[!NOTE] To configure the VMs, go to the PVE Web UI, right-click the gitlab-runner LXC and click Console. The GitLab Runner LXC credentials are in credentials.txt. Inside the console, run ssh k3s@<ip-address-of-the-VM>. The VMs can be converted into templates, formed into an HA cluster, etc. The IP addresses are declared in variables.tf.
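
The VM resources themselves follow this general shape in the bpg provider (all values are illustrative; the repository's .tf files and variables.tf are authoritative):

resource "proxmox_virtual_environment_vm" "k3s_node" {
  name      = "k3s-node-1"
  node_name = "pve" # target PVE node

  cpu {
    cores = 2
  }

  memory {
    dedicated = 2048
  }

  disk {
    datastore_id = "local-lvm"
    interface    = "scsi0"
    size         = 20
  }

  initialization {
    ip_config {
      ipv4 {
        address = "192.168.1.51/24" # the repo declares these in variables.tf
        gateway = "192.168.1.1"
      }
    }
    user_account {
      username = "k3s"
    }
  }
}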

Diagram

[diagram]

 
