Move longhaul to the same Azure subscription we use for E2E tests #167
Listing the steps performed, as a log of what was done and to serve as reference/documentation in case this needs to be redone.

## Setup environment and credentials

From https://github.com/dapr/test-infra/pull/203 and https://github.com/dapr/test-infra/blob/master/README.md:

```sh
export SUBSCRIPTION_TO_BE_USED=INSERT_SUBSCRIPTION_UUID_HERE
export release_or_weekly='release' # use 'weekly' for weekly
export resourceGroup="aks-longhaul-${release_or_weekly}"
export DAPR_VERSION_TO_INSTALL='1.12.0'
export location=eastus
export clusterName=$resourceGroup
export MONITORING_NS=dapr-monitoring
```

## Log in to the OSS subscription

First, log in to the Dapr OSS subscription in your default browser. Then, log in with the az CLI:

```sh
az account clear && az login --output=none && az account set --subscription ${SUBSCRIPTION_TO_BE_USED}
```

## Create the new resource group

```sh
az group create --name ${resourceGroup} --location ${location}
```

## Deploy clusters

```sh
az deployment group create \
  --resource-group ${resourceGroup} \
  --template-file ./deploy/aks/main.bicep \
  --parameters deploy/aks/parameters-longhaul-${release_or_weekly}.json
```

## Remove the Dapr AKS extension

We want to manually control the Dapr setup, so let's remove the Azure-controlled Dapr extension:
```sh
az k8s-extension delete --yes \
  --resource-group ${resourceGroup} \
  --cluster-name ${clusterName} \
  --cluster-type managedClusters \
  --name ${clusterName}-dapr-ext
```

## Get cluster credentials

```sh
az aks get-credentials --admin --name ${clusterName} --resource-group ${resourceGroup}
```

## Install latest stable on both clusters through Helm

Just for good measure, uninstall any existing Dapr first:
```sh
dapr uninstall -k
```

Now on to the Helm chart upgrade:

```sh
helm repo update && \
helm upgrade --install dapr dapr/dapr \
    --version=${DAPR_VERSION_TO_INSTALL} \
    --namespace dapr-system \
    --create-namespace \
    --wait
```

## Bounce the apps (we just re-installed Dapr)

```sh
for app in "feed-generator-app" "hashtag-actor-app" "hashtag-counter-app" "message-analyzer-app" "pubsub-workflow-app" "snapshot-app" "validation-worker-app" "workflow-gen-app"; do
    kubectl rollout restart deploy/${app} -n longhaul-test || break
done
```

## Setup monitoring namespace (next steps require this)

From https://github.com/dapr/test-infra/blob/master/.github/workflows/dapr-longhaul-weekly.yml:
```sh
kubectl get namespace | grep ${MONITORING_NS} || kubectl create namespace ${MONITORING_NS}
```

## Install Prometheus through its Helm chart

Following https://docs.dapr.io/operations/observability/metrics/prometheus/#setup-prometheus-on-kubernetes:

```sh
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && \
helm repo update && \
helm install dapr-prom prometheus-community/prometheus \
    --namespace dapr-monitoring \
    --create-namespace \
    --wait
```

## Install Prometheus custom setting

This is being bypassed: we fixed the dashboard code in dapr/dapr#7121, so there is no need.

## Install Grafana through its Helm chart

Following https://docs.dapr.io/operations/observability/metrics/grafana/#setup-on-kubernetes:
```sh
helm repo add grafana https://grafana.github.io/helm-charts && \
helm repo update && \
helm upgrade --install grafana grafana/grafana \
    --values ./grafana-config/values.yaml \
    --namespace ${MONITORING_NS} \
    --create-namespace \
    --wait && \
kubectl get pods -n ${MONITORING_NS}
```

## Configure Grafana

Steps here basically just follow the steps described in https://docs.dapr.io/operations/observability/metrics/grafana/#configure-prometheus-as-data-source

### Log in to Grafana

```sh
# Copies the admin password to the clipboard (clip.exe is Windows/WSL-specific)
kubectl get secret --namespace dapr-monitoring grafana -o jsonpath={.data.admin-password} | base64 --decode | clip.exe
```
```sh
kubectl port-forward svc/grafana 8080:80 --namespace dapr-monitoring
```

### Register Prometheus datasource

Just follow https://docs.dapr.io/operations/observability/metrics/grafana/#configure-prometheus-as-data-source

### Import dashboards (from dapr/dapr#7121)

Use the code from dapr/dapr#7121 or, if it is merged, from https://github.com/dapr/dapr/blob/master/grafana/

## Remember: Create credentials for both clusters

Initial checks:
- Create service principal
- Role:
This commit updates the workflows to use the clusters configured in the OSS subscription. Those clusters were created as documented in dapr#167. They live in a distinct subscription and have different names than the current ones, hence the need to update the workflows. Additionally, because those clusters use Bicep-configured, Azure-hosted state store, binding, and pubsub components, there is no need to configure Redis or Kafka, or to set up individual components. Signed-off-by: Tiago Alves Macambira <[email protected]>
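Because the Bicep templates provision the Azure-hosted components directly, the workflows no longer need to install Redis or Kafka. Purely for illustration (the component name, type, and account below are assumptions, not what `deploy/aks/main.bicep` actually generates), a Dapr component wired to an Azure-hosted state store has this general shape:

```yaml
# Illustrative sketch only -- the real components come from the Bicep templates.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore            # hypothetical component name
  namespace: longhaul-test
spec:
  type: state.azure.blobstorage   # one of Dapr's Azure-hosted state store types
  version: v1
  metadata:
    - name: accountName
      value: mylonghaulstorage    # hypothetical storage account
    - name: containerName
      value: statestore
```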
Longhaul tests are still running in a subscription that is only accessible to Msft employees. Both the release and nightly environments should be moved to the same Azure subscription we use for E2E tests.
Child of: #156