Expose SUR APIs #2220

Merged · 10 commits · May 12, 2025
7 changes: 4 additions & 3 deletions .github/scripts/end2end/configs/zenko.yaml
@@ -82,6 +82,10 @@ spec:
enable: false
scuba:
replicas: 1
api:
replicas: 1
ingress:
hostname: ${ZENKO_SUR_INGRESS}
management:
provider: InCluster
ui:
@@ -123,9 +127,6 @@ spec:
azure:
archiveTier: "hot"
restoreTimeout: "15s"
-scuba:
-logging:
-logLevel: debug
ingress:
workloadPlaneClass: 'nginx'
controlPlaneClass: 'nginx-control-plane'
2 changes: 1 addition & 1 deletion .github/scripts/end2end/configure-e2e-ctst.sh
@@ -17,7 +17,7 @@ UUID=${UUID%.*}
UUID=${UUID:1}

echo "127.0.0.1 iam.zenko.local ui.zenko.local s3-local-file.zenko.local keycloak.zenko.local \
-sts.zenko.local management.zenko.local s3.zenko.local website.mywebsite.com" | sudo tee -a /etc/hosts
+sts.zenko.local management.zenko.local s3.zenko.local website.mywebsite.com utilization.zenko.local" | sudo tee -a /etc/hosts

# Add bucket notification target
envsubst < ./configs/notification_destinations.yaml | kubectl apply -f -
4 changes: 3 additions & 1 deletion .github/scripts/end2end/deploy-zenko.sh
@@ -17,6 +17,7 @@ export ZENKO_STS_INGRESS=${ZENKO_STS_INGRESS:-'sts.zenko.local'}
export ZENKO_MANAGEMENT_INGRESS=${ZENKO_MANAGEMENT_INGRESS:-'management.zenko.local'}
export ZENKO_S3_INGRESS=${ZENKO_S3_INGRESS:-'s3.zenko.local'}
export ZENKO_UI_INGRESS=${ZENKO_UI_INGRESS:-'ui.zenko.local'}
export ZENKO_SUR_INGRESS=${ZENKO_SUR_INGRESS:-'utilization.zenko.local'}

export BACKBEAT_LCC_CRON_RULE=${BACKBEAT_LCC_CRON_RULE:-'*/5 * * * * *'}

@@ -30,7 +31,8 @@ if [ ${ENABLE_KEYCLOAK_HTTPS} == 'true' ]; then
- ${ZENKO_UI_INGRESS}
- ${ZENKO_MANAGEMENT_INGRESS}
- ${ZENKO_IAM_INGRESS}
-- ${ZENKO_STS_INGRESS}"
+- ${ZENKO_STS_INGRESS}
+- ${ZENKO_SUR_INGRESS}"
else
export ZENKO_INGRESS_ANNOTATIONS="annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 0m"
1 change: 1 addition & 0 deletions .github/scripts/end2end/patch-coredns.sh
@@ -24,6 +24,7 @@ corefile="
rewrite name exact ui.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact management.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact s3.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact utilization.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact sts.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact iam.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact shell-ui.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
1 change: 1 addition & 0 deletions .github/scripts/end2end/prepare-pra.sh
@@ -14,6 +14,7 @@ echo 'ZENKO_STS_INGRESS="sts.dr.zenko.local"' >> "$GITHUB_ENV"
echo 'ZENKO_MANAGEMENT_INGRESS="management.dr.zenko.local"' >> "$GITHUB_ENV"
echo 'ZENKO_S3_INGRESS="s3.dr.zenko.local"' >> "$GITHUB_ENV"
echo 'ZENKO_UI_INGRESS="ui.dr.zenko.local"' >> "$GITHUB_ENV"
echo 'ZENKO_SUR_INGRESS="utilization.dr.zenko.local"' >> "$GITHUB_ENV"

MONGODB_ROOT_USERNAME="${MONGODB_ROOT_USERNAME:-'root'}"
MONGODB_ROOT_PASSWORD="${MONGODB_ROOT_PASSWORD:-'rootpass'}"
8 changes: 7 additions & 1 deletion .github/scripts/end2end/run-e2e-ctst.sh
@@ -74,6 +74,10 @@ BACKBEAT_API_PORT=$(kubectl get secret -l app.kubernetes.io/name=connector-cloud
KAFKA_CLEANER_INTERVAL=$(kubectl get zenko ${ZENKO_NAME} -o jsonpath='{.spec.kafkaCleaner.interval}')
SORBETD_RESTORE_TIMEOUT=$(kubectl get zenko ${ZENKO_NAME} -o jsonpath='{.spec.sorbet.server.azure.restoreTimeout}')

# Utilization service
UTILIZATION_SERVICE_HOST=$(kubectl get zenko ${ZENKO_NAME} -o jsonpath='{.spec.scuba.api.ingress.hostname}')
UTILIZATION_SERVICE_PORT="80"

# Setting CTST world params
WORLD_PARAMETERS="$(jq -c <<EOF
{
@@ -122,7 +126,9 @@ WORLD_PARAMETERS="$(jq -c <<EOF
"SorbetdRestoreTimeout":"${SORBETD_RESTORE_TIMEOUT}",
"TimeProgressionFactor":"${TIME_PROGRESSION_FACTOR}",
"DRAdminAccessKey":"${DR_ADMIN_ACCESS_KEY_ID}",
"DRAdminSecretKey":"${DR_ADMIN_SECRET_ACCESS_KEY}"
"DRAdminSecretKey":"${DR_ADMIN_SECRET_ACCESS_KEY}",
"UtilizationServiceHost":"${UTILIZATION_SERVICE_HOST}",
"UtilizationServicePort":"${UTILIZATION_SERVICE_PORT}"
}
EOF
)"
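Note: the two new `UtilizationService*` world parameters are only wired here; the PR does not show how CTST consumes them. A minimal sketch of a consumer, assuming the cli-testing world layout seen elsewhere in this PR and a hypothetical `/metrics` path:

```ts
// Hypothetical consumer of the new world parameters. The '/metrics' path is
// a placeholder; this PR only wires the host and port into WORLD_PARAMETERS.
import http from 'http';
import Zenko from 'world/Zenko';

export async function queryUtilizationService(world: Zenko, path = '/metrics'): Promise<string> {
    const params = world.parameters as unknown as Record<string, string | undefined>;
    const host = params.UtilizationServiceHost;
    const port = params.UtilizationServicePort || '80';
    return new Promise((resolve, reject) => {
        http.get(`http://${host}:${port}${path}`, res => {
            let body = '';
            res.on('data', (chunk: Buffer) => { body += chunk.toString(); });
            res.on('end', () => resolve(body));
        }).on('error', reject);
    });
}
```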
4 changes: 2 additions & 2 deletions VERSION
@@ -1,5 +1,5 @@
VERSION="2.11.6"
VERSION="2.12.0"

VERSION_SUFFIX=
VERSION_SUFFIX=-preview.1

VERSION_FULL="${VERSION}${VERSION_SUFFIX}"
4 changes: 2 additions & 2 deletions solution/deps.yaml
@@ -106,7 +106,7 @@ scuba:
sourceRegistry: ghcr.io/scality
dashboard: scuba/scuba-dashboards
image: scuba
-tag: 1.0.11
+tag: 1.1.0-preview.9
envsubst: SCUBA_TAG
sorbet:
sourceRegistry: ghcr.io/scality
@@ -136,7 +136,7 @@ vault:
zenko-operator:
sourceRegistry: ghcr.io/scality
image: zenko-operator
-tag: v1.7.3
+tag: v1.7.4
envsubst: ZENKO_OPERATOR_TAG
zenko-ui:
sourceRegistry: ghcr.io/scality
1 change: 1 addition & 0 deletions tests/ctst/Dockerfile
@@ -13,6 +13,7 @@ COPY ./steps /ctst/steps
COPY ./world /ctst/world

USER root
RUN npm install [email protected] -g

RUN chmod 0777 -R /tmp/
RUN chmod 0777 -R /ctst/
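`proper-lockfile` is installed globally, presumably so the new locking logic in `tests/ctst/common/utils.ts` (below) resolves inside the container — that wiring is an inference from this diff. For reference, a minimal sketch of the lock/release contract the helper relies on; the `withLock` wrapper itself is illustrative:

```ts
// Illustrative only: proper-lockfile's lock() resolves to a release function,
// the exact pattern prepareMetricsScenarios (added below) relies on.
import lockFile from 'proper-lockfile';

async function withLock(path: string, stale: number, fn: () => Promise<void>): Promise<void> {
    const release = await lockFile.lock(path, { stale }); // rejects if the lock is held
    try {
        await fn();
    } finally {
        await release(); // always release, even if fn() throws
    }
}
```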
5 changes: 5 additions & 0 deletions tests/ctst/common/hooks.ts
@@ -7,6 +7,7 @@ import {
import Zenko from '../world/Zenko';
import { CacheHelper, Identity } from 'cli-testing';
import { prepareQuotaScenarios, teardownQuotaScenarios } from 'steps/quotas/quotas';
import { prepareUtilizationScenarios } from 'steps/utilization/utilizationAPI';
import { cleanS3Bucket } from './common';
import { cleanAzureContainer, cleanZenkoLocation } from 'steps/azureArchive';
import { displayDebuggingInformation, preparePRA } from 'steps/pra';
@@ -42,6 +43,10 @@ Before({ tags: '@Quotas', timeout: 1200000 }, async function (scenarioOptions) {
    await prepareQuotaScenarios(this as Zenko, scenarioOptions);
});

Before({ tags: '@UtilizationAPI', timeout: 1200000 }, async function (scenarioOptions) {
    await prepareUtilizationScenarios(this as Zenko, scenarioOptions);
});

After(async function (this: Zenko, results) {
    // Reset any configuration set on the endpoint (ssl, port)
    CacheHelper.parameters.ssl = this.parameters.ssl;
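`prepareUtilizationScenarios` itself is not part of this diff; given the shared helper added to `tests/ctst/common/utils.ts` below, it is presumably a thin wrapper. A hedged sketch (file path and import aliases assumed):

```ts
// Hedged sketch of steps/utilization/utilizationAPI.ts (not shown in this
// diff): presumably a thin wrapper over the shared helper in common/utils.
import { ITestCaseHookParameter } from '@cucumber/cucumber';
import Zenko from 'world/Zenko';
import { prepareMetricsScenarios } from 'common/utils';

export async function prepareUtilizationScenarios(
    world: Zenko,
    scenarioConfiguration: ITestCaseHookParameter,
): Promise<void> {
    // Defaults already target the end2end-ops-count-items job; override only
    // what the utilization feature needs (nothing, in this sketch).
    await prepareMetricsScenarios(world, scenarioConfiguration, {});
}
```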
106 changes: 106 additions & 0 deletions tests/ctst/common/utils.ts
@@ -11,6 +11,12 @@ import {
} from '@aws-sdk/client-iam';
import { AWSCliOptions } from 'cli-testing';
import Zenko from 'world/Zenko';
import fs from 'fs';
import lockFile from 'proper-lockfile';
import { ITestCaseHookParameter } from '@cucumber/cucumber';
import { AWSCredentials, Constants, Identity, IdentityEnum, Utils } from 'cli-testing';
import { createBucketWithConfiguration, putObject } from '../steps/utils/utils';
import { createJobAndWaitForCompletion } from '../steps/utils/kubernetes';

/**
* This helper will dynamically extract a property from a CLI result
@@ -297,3 +303,103 @@ export async function cleanupAccount(world: Zenko, accountName: string) {
});
}
}

interface PrepareScenarioOptions {
    versioning?: string;
    jobNamespace?: string;
    jobName?: string;
}

/**
 * Generic helper to prepare scenarios that need a cronjob run (e.g. count-items).
 * Shared by the quota and utilization tests to avoid code duplication:
 * creates the accounts and buckets, then runs the cronjob once for all scenarios.
 */
export async function prepareMetricsScenarios(
    world: Zenko,
    scenarioConfiguration: ITestCaseHookParameter,
    options: PrepareScenarioOptions = {},
): Promise<void> {
    const { gherkinDocument, pickle } = scenarioConfiguration;
    const featureName = gherkinDocument.feature?.name?.replace(/ /g, '-').toLowerCase() || 'metrics';
    const filePath = `/tmp/${featureName}`;
    let initiated = false;
    let releaseLock: (() => Promise<void>) | false = false;
    const output: Record<string, AWSCredentials> = {};

    const {
        versioning = '',
        jobName = 'end2end-ops-count-items',
        jobNamespace = `${featureName}-setup`,
    } = options;

    // First worker to get here creates the marker file and takes the lock;
    // the others poll the file until it is flagged ready.
    if (!fs.existsSync(filePath)) {
        fs.writeFileSync(filePath, JSON.stringify({
            ready: false,
        }));
    } else {
        initiated = true;
    }

    if (!initiated) {
        try {
            releaseLock = await lockFile.lock(filePath, { stale: Constants.DEFAULT_TIMEOUT / 2 });
        } catch (err) {
            world.logger.error('Unable to acquire lock', { err });
            releaseLock = false;
        }
    }

    if (releaseLock) {
        const scenarioIds = new Set<string>();

        // One account/bucket per examples-table row: values.id is the row's
        // AST node id, which is also what pickle.astNodeIds[1] yields below.
        for (const scenario of gherkinDocument.feature?.children || []) {
            for (const example of scenario.scenario?.examples || []) {
                for (const values of example.tableBody || []) {
                    const scenarioWithExampleID = hashStringAndKeepFirst20Characters(`${values.id}`);
                    scenarioIds.add(scenarioWithExampleID);
                }
            }
        }

        for (const scenarioId of scenarioIds) {
            await world.createAccount(scenarioId, true);
            await createBucketWithConfiguration(world, scenarioId, versioning);
            await putObject(world);
            output[scenarioId] = Identity.getCurrentCredentials()!;
        }

        await createJobAndWaitForCompletion(world, jobName, jobNamespace);

        await Utils.sleep(2000);
        fs.writeFileSync(filePath, JSON.stringify({
            ready: true,
            ...output,
        }));

        await releaseLock();
    } else {
        while (!fs.existsSync(filePath)) {
            await Utils.sleep(100);
        }

        let configuration: { ready: boolean } = JSON.parse(fs.readFileSync(filePath, 'utf8')) as { ready: boolean };
        while (!configuration.ready) {
            await Utils.sleep(100);
            configuration = JSON.parse(fs.readFileSync(filePath, 'utf8')) as { ready: boolean };
        }
    }

    const configuration: typeof output = JSON.parse(fs.readFileSync(filePath, 'utf8')) as typeof output;
    const key = hashStringAndKeepFirst20Characters(`${pickle.astNodeIds[1]}`);
    world.logger.debug('Scenario key', { key, from: `${pickle.astNodeIds[1]}`, configuration });

    world.addToSaved('bucketName', key);
    world.addToSaved('accountName', key);
    world.addToSaved('accountNameForScenario', key);
    world.addToSaved('metricsEnvironmentSetup', true);

    if (configuration[key]) {
        Identity.addIdentity(IdentityEnum.ACCOUNT, key, configuration[key], undefined, true, true);
    }
}
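Usage boils down to one call from a tagged Before hook. A sketch with assumed option values — the versioning flag mirrors the "with"/"without" wording of the feature files, and the tag and namespace are hypothetical:

```ts
// Hypothetical Before hook using the shared helper directly.
import { Before } from '@cucumber/cucumber';
import Zenko from 'world/Zenko';
import { prepareMetricsScenarios } from 'common/utils';

Before({ tags: '@SomeMetricsFeature', timeout: 1200000 }, async function (scenarioOptions) {
    await prepareMetricsScenarios(this as Zenko, scenarioOptions, {
        versioning: 'with',                    // assumed flag value
        jobNamespace: 'metrics-feature-setup', // hypothetical namespace
    });
});
```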
24 changes: 12 additions & 12 deletions tests/ctst/features/cloudserverAuth.feature
@@ -5,9 +5,9 @@ Feature: AWS S3 Bucket operations
@Cloudserver-Auth
Scenario: Check Authentication on bucket object lock actions with Vault
Given a IAM_USER type
-And an IAM policy attached to the entity "user" with "Allow" effect to perform "CreateBucket" on "*"
-And an IAM policy attached to the entity "user" with "<allow>" effect to perform "PutBucketObjectLockConfiguration" on "*"
-And an IAM policy attached to the entity "user" with "<allow>" effect to perform "PutBucketVersioning" on "*"
+And an IAM policy attached to the entity "user" with "Allow" effect to perform "s3" "CreateBucket" on "arn:aws:s3:::*"
+And an IAM policy attached to the entity "user" with "<allow>" effect to perform "s3" "PutBucketObjectLockConfiguration" on "arn:aws:s3:::*"
+And an IAM policy attached to the entity "user" with "<allow>" effect to perform "s3" "PutBucketVersioning" on "arn:aws:s3:::*"
When the user tries to perform CreateBucket
Then it "<should>" pass Vault authentication

@@ -24,9 +24,9 @@
Scenario: Check Authentication on bucket retention actions with Vault
Given an existing bucket "" "without" versioning, "with" ObjectLock "GOVERNANCE" retention mode
And a IAM_USER type
-And an IAM policy attached to the entity "user" with "Allow" effect to perform "PutObject" on "*"
-And an IAM policy attached to the entity "user" with "Allow" effect to perform "PutObjectRetention" on "*"
-And an IAM policy attached to the entity "user" with "<allow>" effect to perform "BypassGovernanceRetention" on "*"
+And an IAM policy attached to the entity "user" with "Allow" effect to perform "s3" "PutObject" on "arn:aws:s3:::*"
+And an IAM policy attached to the entity "user" with "Allow" effect to perform "s3" "PutObjectRetention" on "arn:aws:s3:::*"
+And an IAM policy attached to the entity "user" with "<allow>" effect to perform "s3" "BypassGovernanceRetention" on "arn:aws:s3:::*"
And an object "" that "exists"
When the user tries to perform PutObjectRetention "<withBypass>" bypass
Then it "<should>" pass Vault authentication
@@ -44,15 +44,15 @@
Scenario: Check Authentication on DeleteObjects with Vault
Given an existing bucket "<bucketName>" "without" versioning, "without" ObjectLock "without" retention mode
And a IAM_USER type
-And an IAM policy attached to the entity "user" with "Allow" effect to perform "PutObject" on "*"
-And an IAM policy attached to the entity "user" with "Allow" effect to perform "DeleteObject" on "<resource1>"
-And an IAM policy attached to the entity "user" with "<allow>" effect to perform "DeleteObject" on "<resource2>"
+And an IAM policy attached to the entity "user" with "Allow" effect to perform "s3" "PutObject" on "arn:aws:s3:::*"
+And an IAM policy attached to the entity "user" with "Allow" effect to perform "s3" "DeleteObject" on "<resource1>"
+And an IAM policy attached to the entity "user" with "<allow>" effect to perform "s3" "DeleteObject" on "<resource2>"
And an object "<objName1>" that "exists"
And an object "<objName2>" that "exists"
When the user tries to perform DeleteObjects
Then it "<should>" pass Vault authentication

Examples:
-| bucketName | objName1 | objName2 | resource1 | resource2 | allow | should |
-| ca-do-bucket-1 | obj1 | obj2 | ca-do-bucket-1/obj1 | ca-do-bucket-1/obj2 | Allow | should |
-| ca-do-bucket-2 | obj1 | obj2 | ca-do-bucket-2/obj1 | ca-do-bucket-2/obj2 | Deny | should not |
+| bucketName | objName1 | objName2 | resource1 | resource2 | allow | should |
+| ca-do-bucket-1 | obj1 | obj2 | arn:aws:s3:::ca-do-bucket-1/obj1 | arn:aws:s3:::ca-do-bucket-1/obj2 | Allow | should |
+| ca-do-bucket-2 | obj1 | obj2 | arn:aws:s3:::ca-do-bucket-2/obj1 | arn:aws:s3:::ca-do-bucket-2/obj2 | Deny | should not |
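The reworded steps now pass an explicit IAM service and full ARNs instead of bare action names and `*`. The step definition itself is not in this diff; a hedged sketch of the shape it implies (all names assumed):

```ts
// Hedged sketch of the step definition implied by the new wording; the real
// implementation lives in tests/ctst/steps and is not part of this diff.
import { Given } from '@cucumber/cucumber';
import Zenko from 'world/Zenko';

Given(
    'an IAM policy attached to the entity {string} with {string} effect '
    + 'to perform {string} {string} on {string}',
    async function (this: Zenko, entity: string, effect: string,
        service: string, action: string, resource: string) {
        const policyDocument = {
            Version: '2012-10-17',
            Statement: [{
                Effect: effect,                 // "Allow" | "Deny"
                Action: `${service}:${action}`, // e.g. "s3:CreateBucket"
                Resource: resource,             // full ARN, e.g. "arn:aws:s3:::*"
            }],
        };
        // Attaching the policy to the user/role is elided here.
        this.logger.debug('attach policy', { entity, policyDocument });
    },
);
```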