Delayed Evaluation

I want to uninstall the Windows 11 built-in Teams App during onboarding.
Because the computer starts out as Windows 10, the evaluation process correctly identifies that the Windows 11 built-in Teams App is not installed. But then the step that upgrades to Windows 11 installs it, and because ImmyBot has already evaluated, it doesn’t know the app is now present and so never uninstalls it.

To resolve this we would need to be able to flag packages as “Force Re-evaluation After Install/Update” and then ImmyBot would delay evaluating subsequent packages in the action order until after that package had completed (if that package required any maintenance).

Interestingly enough, @DimitriRodis requested this back in March to deal with NVidia updates, and a similar fix could also work for re-evaluating available updates after a feature update. Here’s what @DimitriRodis proposed:

Release notes

You can now have maintenance items (software / tasks) defer their evaluation for compliance until the execution stage.

Problem to solve

Here is the scenario:
Someone has deployments for the following two things:
Dell Command Update (which updates drivers, including video)
nVidia Driver Installation/Update Task (which updates a video driver needed for rendering/CAD)

Without the “deferred evaluation” feature, here is what happens:

  1. Evaluation of all maintenance items - ordering, and compliance checks
  2. –Dell Command Update returns non-compliance because there are driver updates available.
  3. –nVidia Driver Installation/Update Task returns compliant.
  4. Execution stage begins.
  5. –Dell Command Update runs, and installs an inappropriate driver.
  6. Execution concludes.
  7. Next maintenance cycle: Evaluation of all maintenance items - ordering, and compliance checks
  8. –Dell Command Update returns compliant, all is up to date.
  9. –nVidia Driver Installation/Update Task returns non-compliant because of the driver Dell Command Update installed.
  10. Execution stage begins.
  11. –nVidia Driver Installation/Update Task installs the driver and completes.
  12. Execution concludes.
  13. Goto Step 1 (forever).

One seemingly obvious “fix” for this is to exclude video driver updates from Dell Command Update. That is possible, and it has been tried, but something else (possibly another driver, such as a chipset driver) can apparently still cause the problem, because it’s happening to Jason.

Proposal

The proposal to “fix” this and other similar situations, while less than completely ideal, is very simple: add a checkbox to every maintenance item (software/tasks) that defers its compliance evaluation until the item is reached in Immy’s execution stage. With this feature implemented, the session above would instead work as follows:

  1. Evaluation of all maintenance items - ordering
  2. Evaluation of all maintenance items that do not have deferred evaluation is performed.
  3. –Dell Command Update returns non-compliance because there are driver updates available.
  4. –nVidia Driver Installation/Update Task is present, but compliance check is deferred.
  5. Execution stage begins.
  6. –Dell Command Update runs, and installs an inappropriate driver.
  7. –nVidia Driver Installation/Update Task compliance check is run, returns non-compliance due to the inappropriate driver.
  8. –nVidia Driver Installation/Update Task installs the driver and completes.
  9. Execution concludes.

My thoughts:
Are we perhaps over-generalizing a solution to a problem that only happens in very specific scenarios? Would this problem be better solved by making Immy natively understand what an OS upgrade is instead of us shoehorning Feature updates into a Software package? We’re already having to do a lot of trickery to make this work. For example:

  1. Windows 11’s version is still 10.0.X, so I hacked our Software Inventory script to return a fake “Software” called “Windows 10” or “Windows 11” depending on whether the OS version in WMI is 10.0.22000 or higher (a rough sketch of this follows the list). This allows for easier bulk reporting since these inventory scripts run daily, and eliminates the need for a custom detection script for Windows 10, but it is still…hacky.
  2. I create static versions for each build of Windows 10 with fake download links, then use a custom Download script that leverages the Rufus Windows ISO download script to download the actual ISO for that version. That isn’t a problem per se, but it does mean I have to manually add a static version each time a new Feature update becomes available. The real issue is that most people refer to feature updates by their short names (21H2, 22H2, etc.), and those aren’t natively sortable like version numbers, so the casual enthusiast ends up on Wikipedia’s Windows 10 page trying to figure out the latest version of Windows that ImmyBot supports. Sure, I could add a version “alias” to make it easier, but then I’m yet again adding an entire feature for one specific edge case.
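For what it’s worth, here is a minimal sketch of the inventory hack from item 1. The build check itself is accurate (Windows 11 starts at build 22000); the shape of the returned object is purely illustrative and not ImmyBot’s actual inventory format.

```powershell
# Sketch of the inventory hack from item 1 (not the actual ImmyBot inventory script):
# report a fake "Windows 10" / "Windows 11" software entry based on the OS build number.
$os    = Get-CimInstance -ClassName Win32_OperatingSystem
$build = [version]$os.Version              # e.g. 10.0.19045 (Win10) or 10.0.22621 (Win11)

# Windows 11 still reports a 10.0.x version; builds >= 22000 are Windows 11
$name = if ($build.Build -ge 22000) { 'Windows 11' } else { 'Windows 10' }

# Emit a pseudo software entry alongside the real installed-software list
[pscustomobject]@{
    DisplayName    = $name
    DisplayVersion = $os.Version
    Publisher      = 'Microsoft Corporation'
}
```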

Moving forward we’re going to be releasing ImmyMDM which gives us access to new Windows Update APIs in Windows. This is going to force us to build out the concept of “OS Updates” in Immy anyway.

But regardless of how we apply the updates, I propose we create an additional “Platform Update” phase of the session that happens before everything else. The logic would work exactly as @James_Harper and @DimitriRodis suggest, where both detection and execution of other items are deferred. The only problem here is what to show in emails to the end user. The naive case would be to show only the OS upgrade and perhaps say “This platform upgrade must be performed before further updates can be evaluated”, or something to that effect.

While on that line of thought, I also want to propose the concept of action criticality. I won’t go into detail here, but the idea is that you could run less invasive updates on a more frequent schedule than our default of one week.

Anyway, my solution doesn’t fully solve @DimitriRodis 's Dell vs Autodesk video driver problem. For this I’d also suggest we extend the concept of Software to cover Drivers. We’d start by inventorying installed drivers as well as the hardware they may be bound to; I’d likely start by only inventorying Video and Ethernet drivers/hardware. Then we’d add a “DriverUpdate” actionType that requires specifying the inventoried VEN/DEV IDs (or VID/PID for USB devices). This way, during detection we can prevent adding multiple driver update actions for the same piece of hardware.
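As a rough illustration of the inventory side, something like this would pull the installed Display and Net class drivers along with the hardware IDs they are bound to. Win32_PnPSignedDriver is standard WMI; how ImmyBot would key and store this data is deliberately left open.

```powershell
# Sketch: inventory Display (video) and Net (Ethernet) class drivers plus the hardware IDs
# they are bound to. How ImmyBot would store or key this data is not defined here.
Get-CimInstance -ClassName Win32_PnPSignedDriver |
    Where-Object { $_.DeviceClass -in 'DISPLAY', 'NET' } |
    Select-Object DeviceName, DeviceClass, DriverVersion, DriverProviderName, HardwareID
```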

This would build on top of a feature we have actively in development that allows for the creation of child actions within the Metascript engine. The concept is to allow Dell Command Update/Lenovo System Update/HP Image Assistant tasks to create individual actions on the session for each of the items they intend to update via a new cmdlet Add-ChildAction or something to that effect.
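To make that concrete, here is a hedged sketch of how a Dell Command Update task might surface its pending items as child actions. Both Add-ChildAction and Get-DellCommandUpdateScanResult are hypothetical names; nothing here has shipped.

```powershell
# Hypothetical sketch only: Add-ChildAction is a proposed cmdlet name, not a shipping API,
# and Get-DellCommandUpdateScanResult is a made-up helper standing in for parsing
# the dcu-cli scan output.
$pendingUpdates = Get-DellCommandUpdateScanResult

foreach ($update in $pendingUpdates) {
    # Each item Dell Command Update intends to install becomes its own action on the session
    Add-ChildAction -Name $update.Name `
                    -ActionType DriverUpdate `
                    -HardwareId $update.HardwareId   # ties the action to a specific VEN/DEV pair
}
```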

Now of course the real kicker is resolving the hardware id from the perspective of a generic NVidia Driver installer that contains hundreds of drivers, or perhaps one universal driver that covers thousands of pieces of hardware. This could likely be cumbersome to resolve from the perspective of Dell Command/HP etc but my hopes are that there is enough information in the logs to be able to figure out why it decided an update was available in the first place. Perhaps an easier way to go about this would be to instead look at it from the perspective of the types of hardware we support, Video and Ethernet. We could then implement the concept of Dell Command being a “Driver Provider” capable of updating Video and Ethernet drivers, while NVidia is a “Driver Provider” capable of updating Video drivers. Further, we could scrape together some code I wrote a long time ago for fetching the latest “Autodesk Compatible” drivers and indicate that it is also a “Driver Provider” capable of providing Video drivers.

In this way we could set a policy that says “Computers used by members of the Engineering team should have their video drivers come from the Autodesk Driver Provider.”

My idea for delayed evaluation was kind of the opposite. Instead of saying “delay the evaluation of this item”, the checkbox would mean “delay the evaluation of everything that comes after this item”. The “platform updates” idea would probably work though.

One other scenario I have come across is updating firmware on HP docks. The firmware package includes WMI libraries that allow querying the version number; without them, all I can do is tell whether a dock is connected or not. What I do in this case (sketched below) is:
  1. No dock connected? Return version 99.99.99.99 from the version detection script.
  2. Dock connected but we don’t have the WMI library? Return version 0.0.0.0 from the version detection script.
  3. Build the “do we actually need an upgrade” logic into the install script.
  4. (Added complexity: installing the firmware upgrade may disconnect us from the network for a bit.)
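For reference, a minimal sketch of that version detection script. The HP WMI namespace, class, and property names below are placeholders for whatever the firmware package’s WMI libraries actually expose, and the dock-presence check via Get-PnpDevice is only an approximation.

```powershell
# Sketch of the dock firmware version detection described above. The HP WMI
# namespace/class/property names are placeholders, not the real ones.
$dock = Get-PnpDevice -PresentOnly -FriendlyName '*HP*Dock*' -ErrorAction SilentlyContinue

if (-not $dock) {
    return '99.99.99.99'    # no dock connected: report "newer than anything" so nothing runs
}

$fw = Get-CimInstance -Namespace 'root\HP\InstrumentedServices\v1' `
                      -ClassName 'HP_DockAccessory' -ErrorAction SilentlyContinue

if (-not $fw) {
    return '0.0.0.0'        # dock present but WMI library missing: force the install script to run
}

return $fw.FirmwareVersion  # the "do we actually need an upgrade" logic lives in the install script
```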

Plugins are another scenario. I think we have it sorted for Chrome and Edge plugins because we just query the registry, but there could easily be other applications with dependencies between an app and its plugin that could only be solved via delayed evaluation.
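Assuming the registry check in question is against the browsers’ force-install policy keys, a detection sketch could look like the following; the extension ID is a placeholder, not a specific real extension.

```powershell
# Sketch: check whether a given extension ID appears in the Chrome/Edge force-install
# policy lists. The extension ID below is a placeholder.
$extensionId = 'abcdefghijklmnopabcdefghijklmnop'
$policyKeys  = @(
    'HKLM:\SOFTWARE\Policies\Google\Chrome\ExtensionInstallForcelist'
    'HKLM:\SOFTWARE\Policies\Microsoft\Edge\ExtensionInstallForcelist'
)

$found = foreach ($key in $policyKeys) {
    if (Test-Path $key) {
        # Each value is "<extensionId>;<updateUrl>" (or just the ID for store extensions)
        (Get-ItemProperty -Path $key).PSObject.Properties |
            Where-Object { $_.Value -like "$extensionId*" }
    }
}

[bool]$found   # $true if either browser force-installs the extension
```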

Regarding your comments about Windows 11 just being a layer of paint on Windows 10, I feel that. I further butchered your WindowsUrl function to make it do what I wanted. It’s even worse than that now, as there is 22H2v1, which is build 22621.675 (22H2 was 22621.525, I think). They might have done the same with Windows 10 too.

+1 on this! I have lots of tasks that depend on other software being installed, which means onboarding has to run twice to get everything installed. If we could set those types of tasks to evaluate right before they run, that would be great for us.

I do understand dependencies in software deployments, but that doesn’t work in the case of Global Software deployments, where we cannot edit the dependencies.

I’m a stickler about desktop icons automatically showing up on my desktop every week when software is updated. I wrote a task to clean up public desktop icons created by the SYSTEM account within the last hour, and I’d like to run it at the end of our deployments, but that would require this feature so its evaluation can run after all the other tasks have run.

No, it wouldn’t. Just make the task/script only implement “set” (enforcement only) and since there is no evaluation, it will simply execute every time without prior (or subsequent) verification of compliance.
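For what it’s worth, a minimal sketch of what that set-only cleanup could look like; using the file owner plus creation time is only an approximation of “created by the SYSTEM account within the last hour”.

```powershell
# Sketch of a set-only cleanup: remove public-desktop shortcuts created within the
# last hour whose owner is SYSTEM. Owner + CreationTime is an approximation, since
# installers running as SYSTEM sometimes leave a different owner on the file.
$cutoff = (Get-Date).AddHours(-1)

Get-ChildItem -Path "$env:PUBLIC\Desktop" -Filter *.lnk |
    Where-Object {
        $_.CreationTime -gt $cutoff -and
        (Get-Acl -Path $_.FullName).Owner -eq 'NT AUTHORITY\SYSTEM'
    } |
    Remove-Item -Force
```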

Adding one more use case to this feature request. I wrote a script to detect whether the Adobe PDF plugin will load in Outlook and gave it selectable parameters so you can choose whether Immy should make sure it is Enabled or Disabled. When it runs at detection, it sees the plugin is disabled (having been set to disabled in a previous session) and marks it as needing nothing for the session. A task higher up in the stack then updates Adobe products and the plugin gets re-enabled, but it never gets disabled at execution time because detection showed compliant before the update ran. I have just set the task to use the script only as a Set script, but then I lose the ability to detect and report which workstations are compliant in the Preview. The same thing happens with a WebEx deployment I use to stop it from automatically starting up: every week it updates and gets re-enabled, and it pops up until it gets detected again during the next maintenance.
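A minimal sketch of the detection half of such a script, assuming the add-in lives under the usual Office Addins key in HKLM and that PDFMOutlook.PDFMOutlook is the right ProgID (both assumptions; it could just as easily be registered under HKCU):

```powershell
# Sketch of an Enabled/Disabled detection for the Adobe PDF Outlook add-in.
# The registry location and ProgID are assumptions; LoadBehavior 3 = load at startup,
# 0 or 2 = disabled.
param(
    [ValidateSet('Enabled', 'Disabled')]
    [string]$DesiredState = 'Disabled'
)

$key          = 'HKLM:\SOFTWARE\Microsoft\Office\Outlook\Addins\PDFMOutlook.PDFMOutlook'
$loadBehavior = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).LoadBehavior

$actualState = if ($loadBehavior -eq 3) { 'Enabled' } else { 'Disabled' }

# Compliant only when the current state matches what the deployment asked for
return ($actualState -eq $DesiredState)
```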