$ argo get -n argo @latest
Name: hello-world-kqgzf
Namespace: argo
ServiceAccount: unset (will run with the default ServiceAccount)
Status: Succeeded
Conditions:
PodRunning False
Completed True
Created: Tue Aug 06 12:10:15 +0200 (2 minutes ago)
Started: Tue Aug 06 12:10:15 +0200 (2 minutes ago)
Finished: Tue Aug 06 12:10:35 +0200 (2 minutes ago)
Duration: 20 seconds
Progress: 1/1
ResourcesDuration: 0s*(1 cpu),2s*(100Mi memory)
STEP TEMPLATE PODNAME DURATION MESSAGE
✔ hello-world-kqgzf hello-world hello-world-kqgzf 10s
Working through https://github.com/cms-dpoa/cloud-processing/tree/main/standard-gke-cluster-gcs#readme
Starting from a new project created with
- Update the title: "with an NFS disk" -> "with a Google Cloud Storage (GCS) bucket" #42
- Mention that a billing account needs to be enabled for the GCP project
- List the APIs that need to be enabled for the GCP project for the bucket access and GKE, and instruct how to enable them:
  `gcloud services enable container.googleapis.com...`
- Mention that the bucket location should be the same as the project region
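A sketch of what the README could show for these two points; the exact service names and the `gcloud storage` subcommand are assumptions to be verified, and `<BUCKET_NAME>`/`<REGION>` are placeholders:

```shell
# Enable the APIs needed for GKE and for bucket access
# (service names assumed; verify against the project requirements).
gcloud services enable container.googleapis.com storage.googleapis.com

# Create the bucket in the same location as the project region,
# as suggested above (<BUCKET_NAME> and <REGION> are placeholders).
gcloud storage buckets create gs://<BUCKET_NAME> --location=<REGION>
```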
- For the cluster creation, mention that PROJECT_ID can be found, for example, in the output of
  `gcloud projects list`
- Instruct how to start the `argo` services (the `argo` namespace exists but it is empty):
  - using the version number from `argo version` (after the CLI installation), tried `kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/<VERSION>/install.yaml` but `argo hello-world` does not work with this because:
  - using `kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml` (inherited from the old instructions), `argo hello-world` goes OK. This is one of the manifests attached also to the release page. It would be nice to understand what the difference between them is (namespace?). Would the other manifests work?
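For reference, a minimal sequence that matches the working path described above (a sketch, not verified end to end; pinning a release tag instead of `master` would likely be safer for the README):

```shell
# Create the namespace if it does not exist yet.
kubectl create namespace argo

# Apply the quick-start manifest that made `argo hello-world` work.
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml

# Run the hello-world example and watch it until completion.
argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/hello-world.yaml
```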
- Match the job number in the start job to the node number of the cluster (now 6 vs 3, they should be equal)
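To check that the two numbers match, the node count can be read directly from the cluster, e.g.:

```shell
# Count the nodes in the cluster; the job parallelism in the
# start job should equal this number.
kubectl get nodes --no-headers | wc -l
```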
- Add how to find the files from the bucket
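One way the README could show this (a sketch; `<BUCKET_NAME>` is a placeholder):

```shell
# List all files in the bucket recursively
# (<BUCKET_NAME> is a placeholder for the bucket created earlier).
gcloud storage ls --recursive gs://<BUCKET_NAME>
```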