Initial Autoscaler Implementation #15650

Conversation
```js
async loadAutoscalerConfigMap() {
  const url = `/k8s/clusters/${ this.mgmtClusterId }/v1/${ CONFIG_MAP }/${ AUTOSCALER_CONFIG_MAP_ID }`;

  return await this.$dispatch('cluster/request', { url }, { root: true });
}
```
We do this instead of using the conventional cluster/find because we need to fetch the configmap from within a management product, which means the cluster/explorer store has not been loaded.
Loading the cluster seemed far too hackish, but skipping it also means this configMap will not be cached or receive live updates wherever it's used. In those cases we use polling instead, as sketched below.
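For illustration only, a minimal polling sketch, assuming a Vue options-style component; the import path, interval, placeholder ID value, and `mgmtClusterId` property are assumptions and not this PR's code:

```js
// Polling sketch: the configmap is fetched with a raw `cluster/request`,
// so it never receives websocket updates and must be re-fetched on a timer.
// CONFIG_MAP mirrors the constant in the diff above; the other values are illustrative.
import { CONFIG_MAP } from '@shell/config/types';

const AUTOSCALER_CONFIG_MAP_ID = 'kube-system/cluster-autoscaler-status'; // placeholder value
const POLL_INTERVAL_MS = 10000; // assumed interval

export default {
  data() {
    return { autoscalerConfigMap: null, pollHandle: null };
  },

  async mounted() {
    await this.pollAutoscalerConfigMap();
  },

  beforeUnmount() {
    clearTimeout(this.pollHandle);
  },

  methods: {
    async pollAutoscalerConfigMap() {
      // Same raw request as loadAutoscalerConfigMap, issued from a component context.
      const url = `/k8s/clusters/${ this.mgmtClusterId }/v1/${ CONFIG_MAP }/${ AUTOSCALER_CONFIG_MAP_ID }`;

      this.autoscalerConfigMap = await this.$store.dispatch('cluster/request', { url });

      // Schedule the next fetch; there is no live subscription for this resource here.
      this.pollHandle = setTimeout(() => this.pollAutoscalerConfigMap(), POLL_INTERVAL_MS);
    }
  }
};
```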
Long term I think we do need to be able to use loadCluster or something similar in the cluster management context, but that's a can of worms. I think this is a good approach for this feature. loadCluster would be loading quite a bit of extra data each time a user hovers over a new row of the autoscaler column.
Yeah, I agree. Ideally we should make the stores lightweight enough to be able to query resources in any context.
```js
const params = stevePaginationUtils.createParamsForPagination({ schema: eventSchema, opt: { pagination } });
const url = `/k8s/clusters/${ this.mgmtClusterId }/v1/${ EVENT }?${ params }`;

const events = (await this.$dispatch('cluster/request', { url }, { root: true }))?.data || [];
```
See the loadAutoscalerConfigMap comments for details on why we're using cluster/request.
I will also note that I tested this with ui-sql-cache both enabled and disabled. I thought it had to be enabled for the filter fields to work, but that doesn't appear to be the case.
I factored this out into PopoverCard.vue so that it could be reused for the Autoscaler formatter.
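As a rough illustration of that reuse, a formatter wrapping the shared component might look like the sketch below; the import path, `title` prop, default slot, and prop names are assumptions about PopoverCard.vue, not its actual interface:

```vue
<!-- Hypothetical Autoscaler formatter reusing PopoverCard.vue. -->
<template>
  <PopoverCard title="Autoscaler">
    <!-- Popover body: whatever autoscaler status this cell surfaces. -->
    <div>{{ value }}</div>
  </PopoverCard>
</template>

<script>
import PopoverCard from '@shell/components/PopoverCard.vue'; // assumed path

export default {
  name: 'Autoscaler',

  components: { PopoverCard },

  props: {
    // Dashboard formatter components typically receive the cell value and its row.
    value: { type: String, default: '' },
    row:   { type: Object, default: () => ({}) }
  }
};
</script>
```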
mantis-toboggan-md left a comment
When a cluster with autoscaling is first provisioned, the autoscaling popover shows it's already been scaled up and down. Not sure we can address that from the UI side given what I've seen of the autoscaler api -- looks like the UI is just reporting what the configmap says in this case -- but figured I'd call it out anyway.

I'm not sure these buttons make sense at all with autoscaler enabled. The progress bar might need some adjustment too. Since we're disabling the field they correspond to in the cluster creation page, maybe they should be removed from this view entirely?
There are console errors when looking at the autoscaler popover or cluster detail page if the current user doesn't have access to the autoscaler configmap, but I know the plan around access to that configmap is still uncertain.
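One possible way to avoid those console errors (a sketch only, not something this PR implements) would be to treat a forbidden or missing configmap as "status unavailable"; the error shape (`_status`) and the null fallback below are assumptions:

```js
// Sketch: swallow permission/not-found errors when the user cannot read the
// autoscaler ConfigMap, so the popover and detail page degrade gracefully.
async loadAutoscalerConfigMapSafely() {
  const url = `/k8s/clusters/${ this.mgmtClusterId }/v1/${ CONFIG_MAP }/${ AUTOSCALER_CONFIG_MAP_ID }`;

  try {
    return await this.$dispatch('cluster/request', { url }, { root: true });
  } catch (err) {
    // 403: no RBAC access to the configmap; 404: autoscaler not deployed yet.
    if (err?._status === 403 || err?._status === 404) {
      return null; // callers render an "unavailable" state instead of throwing
    }

    throw err;
  }
}
```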
LGTM. I just had one question about validation. I was able to create a one-pool cluster with:
count: 1
min: 1
max: 0
The cluster provisioned fine, with one machine in its one pool, but the autoscaler doesn't seem to initialize. The autoscaler configmap reports that it's "initializing" and the autoscaler pod is full of errors about an invalid max count annotation.
It sounds like max count needs a bit more validation. I noticed we're relying on backend validation for this feature (seems fine given that all of our machine configs are validated once the user clicks create anyway) -- should I file an r/r issue about this?
I'll attempt to do front-end validation, and if there are problems I'll open the r/r issue.

I'm going to put the validation in a separate PR which includes another change requested by the backend.
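For reference, a minimal front-end check along those lines could look like the sketch below. It is a hypothetical helper: the function name, messages, and where it would hook into pool validation are assumptions, and the real validation lands in the follow-up PR mentioned above.

```js
// Sketch of min/max/count validation for an autoscaled machine pool.
// Returns an array of human-readable error strings; empty means valid.
export function validateAutoscalerBounds({ count, min, max }) {
  const errors = [];

  if (min < 0 || max < 0) {
    errors.push('Min and max node counts cannot be negative');
  }

  if (max < min) {
    errors.push(`Max node count (${ max }) must be greater than or equal to min (${ min })`);
  }

  if (count < min || count > max) {
    errors.push(`Node count (${ count }) must be between min (${ min }) and max (${ max })`);
  }

  return errors;
}

// Example: the failing case reported above.
// validateAutoscalerBounds({ count: 1, min: 1, max: 0 });
// -> ['Max node count (0) must be greater than or equal to min (1)',
//     'Node count (1) must be between min (1) and max (0)']
```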


Summary
Fixes #14201
Note: I can provide a figma and an instance with the backend implementation.
Occurred changes and/or fixed issues
This adds the following abilities:
Technical notes summary
Areas or cases that should be tested
cluster-autoscaling feature flag
Notes:
The memory and replica portions are the important bits
Areas which could experience regressions
Screenshot/Video
Checklist
Admin, Standard User and User Base