Merge pull request #94 from minrk/update-permissions
Update some s3 permissions
minrk authored Oct 23, 2024
2 parents 391858e + b5b2b01 commit 90f5b40
Showing 2 changed files with 60 additions and 8 deletions.
32 changes: 25 additions & 7 deletions docs/admin_hub.md
@@ -42,12 +42,30 @@ The current list of authorized GFTS users can be found in [`gfts-track-reconstru

### Giving access to the GFTS Hub and S3 Buckets (Admin only)

Everyone can initiate a Pull Request to add a new user with read-only access to `gfts-reference-data` and `destine-gfts-data-lake`.
There is only one step:

1. Add the new user (GitHub username, in **lowercase**) to `gfts-track-reconstruction/jupyterhub/gfts-hub/values.yaml`

When the PR is merged, the GitHub user will have read-only access to `gfts-reference-data` and `destine-gfts-data-lake` and will be able to run:

```python
import s3fs
s3 = s3fs.S3FileSystem(anon=False)
s3.listdir("gfts-reference-data")
```

To grant read access to private data, or any write access, the user must also be added to an `s3_` group in the `tofu` configuration.
This adds the following steps, which can only be done by a GFTS Hub admin:

2. Add the GitHub username (lowercase) to one of the `s3_` groups in `gfts-track-reconstruction/jupyterhub/tofu/main.tf`, depending on the permissions required:

- `s3_ifremer_developers`: write access to `gfts-ifremer` and `gfts-reference-data`
- `s3_ifremer_users`: write access to `gfts-ifremer` only
- `s3_admins`: admin access to all s3 buckets

3. Run `tofu apply` to apply the S3 permissions. Ensure you are in the `gfts-track-reconstruction/jupyterhub/tofu` folder and have run `source secrets/ovh-creds.sh` before executing the `tofu` command.
4. Update `gfts-track-reconstruction/jupyterhub/secrets/config.yaml` with the output of `tofu output -json s3_credentials_json`, run in the `tofu` folder after applying the S3 permissions with `tofu apply`. If the file contains binary content, you do not have the rights to add new users to the GFTS S3 buckets and will need to ask a GFTS admin for assistance.
5. Don't forget to commit and push your changes!

Steps 3 and 4 are what actually grant the JupyterHub user S3 access.
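The group semantics above, together with the rule in `main.tf` that each user may appear in only one `s3_` group, can be sketched in a few lines of Python. Only the group and bucket names come from the repository; the usernames and the `group_of` helper are hypothetical:

```python
# Illustrative sketch of the s3_ group semantics described above.
# Group names mirror main.tf; usernames and the helper are hypothetical.
S3_GROUPS = {
    "s3_readonly_users": {"write": set()},
    "s3_ifremer_users": {"write": {"gfts-ifremer"}},
    "s3_ifremer_developers": {"write": {"gfts-ifremer", "gfts-reference-data"}},
}
# All three groups share read access to these buckets:
READ_BUCKETS = {"gfts-reference-data", "destine-gfts-data-lake"}

membership = {  # hypothetical membership lists
    "s3_readonly_users": {"alice"},
    "s3_ifremer_users": {"bob"},
    "s3_ifremer_developers": {"carol"},
}

def group_of(user):
    """Each user must appear in exactly one group (a main.tf invariant)."""
    groups = [g for g, users in membership.items() if user in users]
    assert len(groups) == 1, f"{user} must be in exactly one s3_ group"
    return groups[0]

print(group_of("bob"))                        # s3_ifremer_users
print(S3_GROUPS[group_of("carol")]["write"])  # buckets carol can write to
```

The `s3_admins` group is omitted from the sketch since it simply spans all buckets.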
@@ -62,7 +80,7 @@ The following packages need to be installed on your system:

As an admin, you'll need to set up your environment. The GFTS maintainer will provide you with a key encrypted with your GitHub SSH key. Save the content sent by the GFTS maintainer into a file, and name it `ssh-vault.txt`. At the moment, the keys are known to [annefou](https://github.com/annefou) and [minrk](https://github.com/minrk).

```bash
cat ssh-vault.txt | ssh-vault view | base64 --decode > keyfile && git-crypt unlock keyfile && rm keyfile
```

@@ -72,7 +90,7 @@ Thanks to the previous command, you should be able to `cat gfts-track-reconstruc

Finally, to initialize your environment and execute `tofu` commands, change to the `gfts-track-reconstruction/jupyterhub/tofu` folder and source `secrets/ovh-creds.sh`, e.g.:

```bash
source secrets/ovh-creds.sh
tofu init
tofu apply
```
36 changes: 35 additions & 1 deletion gfts-track-reconstruction/jupyterhub/tofu/main.tf
@@ -54,7 +54,7 @@ locals {
"gfts-reference-data",
"destine-gfts-data-lake",
])

# users must appear in only one of these sets
# because each user can have exactly one policy
s3_readonly_users = toset([
@@ -90,6 +90,18 @@ locals {
"s3:ListMultipartUploadParts", "s3:ListBucketMultipartUploads",
"s3:AbortMultipartUpload", "s3:GetBucketLocation",
]

# default-deny policy
# disallows bucket creation
s3_default_deny = {
"Sid" : "default-deny",
"Effect" : "Deny",
"Action" : [
"s3:CreateBucket",
"s3:DeleteBucket",
],
"Resource" : ["arn:aws:s3:::*"]
}
}
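The `s3_default_deny` statement is appended to each user policy below so that, under IAM-style evaluation where an explicit `Deny` overrides any `Allow`, bucket creation and deletion stay blocked even for users whose policy grants a broad action list on some bucket. A minimal sketch of that evaluation order (illustrative only, not OVH's actual policy engine; the `s3:*` allow stands in for `s3_admin_action`):

```python
# Minimal IAM-style policy evaluation: explicit Deny overrides Allow,
# and the implicit default is Deny. Illustrative only.
import fnmatch

def evaluate(statements, action, resource):
    decision = "Deny"  # implicit default-deny
    for st in statements:
        if not any(fnmatch.fnmatch(action, a) for a in st["Action"]):
            continue
        if not any(fnmatch.fnmatch(resource, r) for r in st["Resource"]):
            continue
        if st["Effect"] == "Deny":
            return "Deny"  # an explicit Deny always wins
        decision = "Allow"
    return decision

default_deny = {  # mirrors local.s3_default_deny
    "Effect": "Deny",
    "Action": ["s3:CreateBucket", "s3:DeleteBucket"],
    "Resource": ["arn:aws:s3:::*"],
}
bucket_admin = {  # a broad Allow, standing in for s3_admin_action
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": ["arn:aws:s3:::gfts-ifremer", "arn:aws:s3:::gfts-ifremer/*"],
}

policy = [bucket_admin, default_deny]
print(evaluate(policy, "s3:PutObject", "arn:aws:s3:::gfts-ifremer/data.nc"))  # Allow
print(evaluate(policy, "s3:CreateBucket", "arn:aws:s3:::new-bucket"))         # Deny
```

Because an explicit `Deny` wins regardless of position, appending `local.s3_default_deny` as the last statement of each policy is sufficient.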

####### s3 buckets #######
@@ -167,10 +179,13 @@ resource "ovh_cloud_project_user_s3_policy" "s3_users" {
"Effect" : "Allow",
"Action" : local.s3_readonly_action,
"Resource" : [
"arn:aws:s3:::${aws_s3_bucket.gfts-data-lake.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-data-lake.id}/*",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}/*",
]
},
local.s3_default_deny,
]
})
}
@@ -186,7 +201,9 @@ resource "ovh_cloud_project_user_s3_policy" "s3_ifremer_users" {
"Effect" : "Allow",
"Action" : local.s3_readonly_action,
"Resource" : [
"arn:aws:s3:::${aws_s3_bucket.gfts-data-lake.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-data-lake.id}/*",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}/*",
]
},
@@ -195,9 +212,11 @@ resource "ovh_cloud_project_user_s3_policy" "s3_ifremer_users" {
"Effect" : "Allow",
"Action" : local.s3_admin_action,
"Resource" : [
"arn:aws:s3:::${aws_s3_bucket.gfts-ifremer.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-ifremer.id}/*",
]
},
local.s3_default_deny,
]
})
}
@@ -213,7 +232,9 @@ resource "ovh_cloud_project_user_s3_policy" "s3_ifremer_developers" {
"Effect" : "Allow",
"Action" : local.s3_readonly_action,
"Resource" : [
"arn:aws:s3:::${aws_s3_bucket.gfts-data-lake.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-data-lake.id}/*",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}/*",
]
},
@@ -222,10 +243,13 @@ resource "ovh_cloud_project_user_s3_policy" "s3_ifremer_developers" {
"Effect" : "Allow",
"Action" : local.s3_admin_action,
"Resource" : [
"arn:aws:s3:::${aws_s3_bucket.gfts-ifremer.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-ifremer.id}/*",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}",
"arn:aws:s3:::${aws_s3_bucket.gfts-reference-data.id}/*",
]
},
local.s3_default_deny,
]
})
}
@@ -288,6 +312,15 @@ resource "aws_s3_bucket_acl" "gfts-reference-data" {
}
}

# everyone authenticated can read reference data
grant {
grantee {
type = "Group"
uri = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
}
permission = "READ"
}

dynamic "grant" {
for_each = local.s3_ifremer_users
content {
@@ -613,6 +646,7 @@ provider "harbor" {

resource "harbor_project" "registry" {
name = "gfts"
public = true
}

resource "harbor_robot_account" "builder" {
