[Proposal] Ingestion Spec controller #3
@AdheipSingh this would be a massive step forward and probably one of the most fundamental improvements to the operator. Configuring Kafka sources is essential to controlling the data plane through Helm charts. I'd be happy to help with testing.
@styk-tv I will be sending a draft PR. Your reviews and testing would be really helpful.
@AdheipSingh this is great news, thank you for the fast reply. Waiting for the PR; whenever you're ready, please ping here.
Regarding the field:

```yaml
apiVersion: "druid.apache.org/v1alpha1"
kind: "DruidIngestion"
metadata:
  name: sample-druid-spec
  namespace: mydruid
spec:
  clusterRef: mydruid
  suspend: false
  supervisorSpecRef:
    kind: ConfigMap
    name: mysupervisor
    namespace: druid
```

and it would hold an object like this:

```go
type CrossNamespaceSourceReference struct {
	// API version of the referent.
	// +optional
	APIVersion string `json:"apiVersion,omitempty"`
	// Kind of the referent.
	// +required
	Kind string `json:"kind"`
	// Name of the referent.
	// +required
	Name string `json:"name"`
	// Namespace of the referent, defaults to the namespace of the
	// Kubernetes resource object that contains the reference.
	// +optional
	Namespace string `json:"namespace,omitempty"`
}
```

It makes the YAMLs much cleaner.
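For illustration, the ConfigMap that `supervisorSpecRef` points at could carry the supervisor spec under a data key. A minimal sketch, assuming the key name `supervisor.json` and a bare-bones Kafka supervisor spec (neither is specified in this thread):

```yaml
# Hypothetical ConfigMap referenced by spec.supervisorSpecRef above.
# The data key "supervisor.json" and the spec contents are assumptions
# for illustration only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysupervisor
  namespace: druid
data:
  supervisor.json: |
    {
      "type": "kafka",
      "spec": {
        "dataSchema": { "dataSource": "sample" },
        "ioConfig": {
          "topic": "sample-topic",
          "consumerProperties": { "bootstrap.servers": "kafka:9092" }
        }
      }
    }
```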
Your points are valid. Regarding the users: I think they would prefer adding another configmap.yaml next to their Druid.yaml (as sketched above) over one big spec. Maybe worth asking them.
@AdheipSingh We also need a secure solution for providing authentication information:

```json
"tuningConfig": {
  "type": "hadoop",
  "jobProperties": {
    "fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem",
    "fs.AbstractFileSystem.s3a.impl": "org.apache.hadoop.fs.s3a.S3A",
    "mapreduce.job.classloader": "true",
    "fs.s3a.aws.credentials.provider": "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider",
    "fs.s3a.access.key": "${AWS_ACCESS_KEY}",
    "fs.s3a.secret.key": "${AWS_SECRET_KEY}",
    "fs.s3a.session.token": "${AWS_SESSION_TOKEN}"
  }
}
```
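One possible approach, not something this thread settles on, would be to keep the credentials in a Kubernetes Secret so the `${AWS_ACCESS_KEY}`-style placeholders can resolve from environment variables. A minimal sketch; the Secret name and values are hypothetical:

```yaml
# Hypothetical Secret holding the credentials referenced by the
# ${AWS_ACCESS_KEY}/${AWS_SECRET_KEY}/${AWS_SESSION_TOKEN} placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: druid-s3-credentials
  namespace: mydruid
type: Opaque
stringData:
  AWS_ACCESS_KEY: AKIA...    # placeholder value
  AWS_SECRET_KEY: secret...  # placeholder value
  AWS_SESSION_TOKEN: token...  # placeholder value
```

The Druid pods could then import it with `envFrom: [{secretRef: {name: druid-s3-credentials}}]` on the container, keeping the credentials out of the ingestion spec itself.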