We just had an issue where Chrome wouldn’t deploy because the customer network blocks user access to https://dl.google.com.
Dynamic versioning is great, but it introduces this issue where we need to whitelist every single download site. It would be good if the ImmyBot cloud server could cache (or at least proxy) the download, so that we only need to whitelist one site. The deployment framework I wrote for Automate would always download to our Automate server first, then to the customer endpoint, and it worked well.
I think I could spin up my own via a custom download script and a server somewhere (download first to the server, then from the server to the endpoint), but it would be awesomer if ImmyBot could provide this directly.
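Something like this sketch is what I'm imagining for the DIY route, where the endpoint only ever talks to one whitelisted host. The `https://cache.example.com` relay and its `/fetch` route are made up for illustration; they aren't anything ImmyBot provides:

```powershell
# Hypothetical relay: the cache server downloads the vendor URL itself, then
# serves the file back so the endpoint only ever talks to one whitelisted host.
param(
    [Parameter(Mandatory)][string]$SourceUrl,
    [Parameter(Mandatory)][string]$Destination
)

$CacheServer = 'https://cache.example.com'   # the one host the firewall allows
$RelayUrl    = "$CacheServer/fetch?url=$([uri]::EscapeDataString($SourceUrl))"

try {
    Invoke-WebRequest -Uri $RelayUrl -OutFile $Destination -UseBasicParsing
}
catch {
    # Relay unavailable: fall back to the vendor URL directly
    Invoke-WebRequest -Uri $SourceUrl -OutFile $Destination -UseBasicParsing
}
```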
I think being able to host a caching download server would prove extremely useful in many circumstances. The issue would be keeping it clean.
For example, Autodesk installers are massive. Downloading one of those to 20 PCs at the same time just isn't reasonable. Serialising the downloads is an option, but that will take time too.
It would be great to be able to nominate a device as a “Cache Server” and specify a cache location (e.g. C:\ImmyCache) where large installers can be configured to download to, with specified subnets configured to pull from the cache server.
Then it’s just a matter of making sure that the computer AD object has read access to the location. If access fails (or you’re not on a valid subnet for the cache server), then download from the internet.
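Roughly this shape, as a sketch; the share name, subnet prefixes, and installer details are all placeholder values:

```powershell
# Rough shape of the cache-server logic described above. \\CACHESRV\ImmyCache,
# the subnet prefixes, and the file details are all example values.
$CachePath    = '\\CACHESRV\ImmyCache'
$ValidSubnets = @('10.0.1.', '10.0.2.')   # subnets allowed to pull from the cache
$FileName     = 'AutodeskInstaller.exe'
$SourceUrl    = 'https://download.example.com/AutodeskInstaller.exe'
$Destination  = Join-Path $env:TEMP $FileName

# Are we on a subnet that's allowed to use the cache server?
$OnValidSubnet = $false
foreach ($ip in (Get-NetIPAddress -AddressFamily IPv4).IPAddress) {
    foreach ($prefix in $ValidSubnets) {
        if ($ip.StartsWith($prefix)) { $OnValidSubnet = $true }
    }
}

$CachedFile = Join-Path $CachePath $FileName
if ($OnValidSubnet -and (Test-Path $CachedFile)) {
    # Works only if the computer AD object has read access to the share
    Copy-Item $CachedFile $Destination
}
else {
    # Access failed or wrong subnet: download from the internet as usual
    Invoke-WebRequest -Uri $SourceUrl -OutFile $Destination -UseBasicParsing
}
```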
Yeah, I’d considered that. Having a short (e.g. 30 days) lifespan in the cache would keep it clean.
If we could create a global override for the download script, then we could do whatever we wanted.
Alternatively, if you are operating strict blocklists/allowlists to that extent, would it not make sense to use pattern matching to skim the scripts you’re going to deploy and add their URLs to the list? Personally, I think blocking https://dl.google.com is a bit extreme compared to the norm and is unlikely to be experienced by most people.
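Something like this could do the skimming, assuming your deployment scripts sit in a local folder (`.\Scripts` is just an example path):

```powershell
# Skim a folder of deployment scripts for http(s) URLs and reduce them to a
# unique list of hosts for the firewall allowlist.
$UrlPattern = 'https?://[^\s''"<>)]+'

Get-ChildItem -Path .\Scripts -Filter *.ps1 -Recurse |
    Select-String -Pattern $UrlPattern -AllMatches |
    ForEach-Object { $_.Matches.Value } |
    ForEach-Object { ([uri]$_).Host } |
    Sort-Object -Unique
```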
Or, alternatively, maybe the ask for Immy is to be able to generate an allowlist based on which deployments you want to run?
I just got bitten by this again. A customer wanted to re-purpose a laptop, so we did a systemreset and kept the provisioning package. ImmyBot picked it up as a fresh PC and that all worked perfectly, but then when it came time to download Acrobat, the firewall said no. Most downloads are HTTPS, and this client doesn’t do HTTPS inspection, so almost everything is fine, but the Acrobat dynamic version script resolves to an HTTP download, so the firewall was easily able to block it. Adding multiple exceptions for multiple customers with multiple different firewalls sounds like a lot of work…
So this isn’t really about bandwidth savings; it’s about managing the download restrictions.
The technical side of this should be easy enough to solve:
- ImmyBot maintains a per-MSP (or global?) cloud cache
- Files are identified by full URL, so acrobat.exe downloaded from adobe.com is different to acrobat.exe downloaded from fakeadobe.com (even if someone somehow engineered a hash collision and the hashes matched exactly)
- Download goes like this (sketched after this list):
  - Script says “I want acrobat.exe from adobe.com”
  - ImmyBot checks the cache and downloads the file to the cache if required
  - The file is downloaded to the endpoint from the cache
- ImmyBot cleans up any file that hasn’t been used in (say) 30 days
- Some sort of ability to clean corrupt files from the cache
- Bonus points for dynamic version caching too: any dynamic version script result gets cached for (say) 1 hour
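For illustration, the cache-side lookup from the list above might look something like this. The function name, D:\ImmyCache, and the 30-day window are all assumptions for the sketch, not anything ImmyBot actually does:

```powershell
# Sketch of the cache-side lookup: files are keyed by a hash of the FULL source
# URL, so acrobat.exe from adobe.com can never collide with acrobat.exe from
# fakeadobe.com. D:\ImmyCache and the 30-day window are illustrative values.
function Get-CachedFile {
    param([Parameter(Mandatory)][string]$SourceUrl)

    $CacheRoot = 'D:\ImmyCache'
    $Sha256    = [System.Security.Cryptography.SHA256]::Create()
    $KeyBytes  = $Sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($SourceUrl))
    $Key       = -join ($KeyBytes | ForEach-Object { $_.ToString('x2') })
    $Cached    = Join-Path $CacheRoot $Key

    if (-not (Test-Path $Cached)) {
        Invoke-WebRequest -Uri $SourceUrl -OutFile $Cached -UseBasicParsing
    }
    # Touch the file so the cleanup pass below sees it as recently used
    (Get-Item $Cached).LastAccessTime = Get-Date
    return $Cached
}

# Cleanup pass: anything not used in 30 days gets dropped
Get-ChildItem 'D:\ImmyCache' -File |
    Where-Object { $_.LastAccessTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item
```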
The tricky bit might be the T&Cs/EULAs some vendors have, where you’re allowed to download the installer for internal use but not allowed to redistribute it. I would argue that a cache doesn’t count as “data at rest” and so it’s not redistribution, but I’m a tech, not a lawyer.
https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html is down right now and it’s breaking my installs… if only there was some kind of cache that could keep things ticking along during outages
Seeing as how most of the software utilizes BITS, you could probably set up a BranchCache server for this. When I was still working with MDT, I found that 2Pint Software has some free BranchCache tools to assist in pre-populating the cache. I’d be interested to see how much it improves performance.
We’ve just been bitten by download issues again. For WebView2, ImmyBot scans the WebView2 page on Microsoft Edge Developer to find the latest version, and that page was giving me a 500 error. It’s working again now from my computer, but ImmyBot is still getting a 500 error from where it runs in the cloud. The customer is complaining that ImmyBot is unreliable because they keep getting bitten by these sorts of issues.
If I could just cache the output of version detection, then maintenance tasks on existing endpoints would probably work, because those endpoints are probably already running the latest version. That only holds if the logic is “I can’t figure out the latest version because the server is down, so we’ll assume the latest version hasn’t changed”.
What if we could have a custom “wrapper” around the version detection script, so version detection calls my wrapper instead of calling the actual version detection script? For my purposes, my custom wrapper script would (see the sketch after this list):
- Check if the cached version data is old (expiry = something reasonable, maybe 8 hours)
- If it’s old, then call out to the actual version detection script and refresh
  - If the version detection failed, then keep using the stale data and bump the expiry up by (e.g.) 1 hour
- Return the cached or updated version data, but with the URLs swapped out to point to my caching server
  - Obfuscate the URLs, or even make them random and valid for only a short time, and have the caching server figure things out
- Get the caching server to download so it’s ready for delivery to endpoints, or just download the first time it’s requested
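To make that concrete, here’s a rough sketch. `Get-ActualLatestVersion` and the `https://cache.example.com` server are stand-ins I’ve invented, not real ImmyBot hooks, and the file path and expiry windows are just the values suggested above:

```powershell
# Sketch of the wrapper. Get-ActualLatestVersion stands in for the real version
# detection script, and https://cache.example.com is my hypothetical caching
# server; the file path and expiry windows are the values suggested above.
function Get-WrappedVersion {
    $CacheFile = 'C:\ImmyCache\version.json'
    $Now       = Get-Date
    $Cached    = if (Test-Path $CacheFile) {
        Get-Content $CacheFile -Raw | ConvertFrom-Json
    }

    if (-not $Cached -or $Now -gt [datetime]$Cached.Expires) {
        try {
            $Fresh  = Get-ActualLatestVersion        # the real detection script
            $Cached = [pscustomobject]@{
                Version = $Fresh.Version
                Url     = $Fresh.Url
                Expires = $Now.AddHours(8).ToString('o')   # normal 8-hour expiry
            }
        }
        catch {
            if (-not $Cached) { throw }              # no stale data to fall back on
            # Detection failed: keep the stale data, retry again in an hour
            $Cached.Expires = $Now.AddHours(1).ToString('o')
        }
        $Cached | ConvertTo-Json | Set-Content $CacheFile
    }

    # Swap the vendor URL for one that points at my caching server
    $Cached.Url = "https://cache.example.com/fetch?url=$([uri]::EscapeDataString($Cached.Url))"
    return $Cached
}
```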