WoWInterface - Legion Beta archived threads - Addon settings take excessive amount of time/syscalls

Infus 11-27-16 11:14 AM

Addon settings take excessive amount of time/syscalls
TLDR: WoW uses ~600,000 WriteFile calls to write a 3 MB settings file, and that takes 9 s. That is an order of magnitude slower than it needs to be.

I took a look at how World of Warcraft saves addon settings with Microsoft's Process Monitor:

I've uploaded screenshots of a typical run. What you see there is every call that matches a filter on WeakAuras.lua, so this is for only one file, though the file is 3.5 MB in size.

As you can see in the bottom left, 619,611 events match the filter, almost all of them calls to WriteFile.

By comparing the time of day of the initial CreateFile and the final CloseFile, you can see that writing the file took 9 s.

In the sixth column, the details of each WriteFile call show that each call is passed between 2 and ~15 bytes. On average, each call is passed only 6 bytes.

This is obviously bad: saving a 3 MB settings file shouldn't take longer than a second, nor require 600,000 calls to WriteFile.
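For comparison, the same tiny writes funneled through an ordinary userspace buffer collapse into a handful of OS-level calls. A minimal Python sketch of the effect (illustrative only; WoW itself calls the Win32 WriteFile API directly):

```python
import io

# Count how many low-level write calls reach the "OS" when tiny chunks
# are written directly, versus funneled through a userspace buffer.
class CountingRawFile(io.RawIOBase):
    def __init__(self):
        self.calls = 0
        self.size = 0

    def writable(self):
        return True

    def write(self, b):
        self.calls += 1          # one "syscall" per call reaching here
        self.size += len(b)
        return len(b)

chunk = b"x" * 6                 # ~6 bytes per write, as observed
n_chunks = 100_000               # 100k chunks ~= 600 KB total

raw_direct = CountingRawFile()
for _ in range(n_chunks):
    raw_direct.write(chunk)      # one low-level call per tiny chunk

raw_buffered = CountingRawFile()
buf = io.BufferedWriter(raw_buffered, buffer_size=64 * 1024)
for _ in range(n_chunks):
    buf.write(chunk)             # coalesced in a 64 KB userspace buffer
buf.flush()

print(raw_direct.calls)          # 100000
print(raw_buffered.calls)        # roughly 10 (600 KB / 64 KB)
```

The buffered variant makes four orders of magnitude fewer low-level calls for the same data.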

We also get frequent reports of users losing their WeakAuras settings when WoW crashes; we think this can only happen if WoW is writing the settings file at that moment.

SDPhantom 11-27-16 05:33 PM

There's more going on behind all those write operations than Process Monitor can show. What you're looking at is a serializer that takes a Lua object and converts it back into Lua-readable code that can be loaded back in as-is. I can only guess that it's writing per line as the serializer iterates through the settings table and writes out the resulting data. Without seeing what is written by each and every write call, it's hard to be sure. What I am positive about is that the 9 seconds it takes to write the file isn't solely from the write calls, but also from the serializer iterating through the settings table and using string manipulation to format the output between the calls.
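The write-per-token pattern described above can be sketched like this. This is a toy Python analogue, not Blizzard's actual serializer (real SavedVariables are Lua), but it shows why even a small table produces many individual writes:

```python
# Toy serializer: emits one write per token (brace, key/value, separator),
# the pattern a SavedVariables-style dump would follow.
def serialize(value, write, indent=0):
    pad = "\t" * indent
    if isinstance(value, dict):
        write("{\n")                              # opening enclosure
        for k, v in value.items():
            write(pad + "\t[" + repr(k) + "] = ") # one write per key
            serialize(v, write, indent + 1)       # one or more per value
            write(",\n")                          # one per separator
        write(pad + "}")                          # closing enclosure
    elif isinstance(value, str):
        write('"' + value + '"')
    else:
        write(str(value))

calls = []
serialize({"version": 2, "auras": {"icon": 12345}}, calls.append)
print(len(calls))          # 12 writes for a three-entry table
print("".join(calls))
```

Scale that up to a deeply nested settings table a few megabytes in size and hundreds of thousands of writes follow naturally.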

Data loss for changes in the current session can happen before this process even begins as WoW doesn't save addon settings in the middle of a session unless a UI reload is performed. Crashing during the write isn't likely as everything is stopped when this takes place.

There are definitely better methods of saving data, but I think Blizzard opted for this approach for the ease of performing a Lua dofile() call on load.

Infus 11-28-16 06:50 AM

Sure, there's more going on. But 600k calls to WriteFile will take the majority of that time. That's roughly 67k calls per second, or 67 calls per millisecond. From my experience as a software developer, I'm pretty sure those calls take the majority of those 9 seconds.
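For the record, the arithmetic behind that estimate, using the exact event count from the Process Monitor screenshot:

```python
calls = 619_611                        # WriteFile events matching the filter
seconds = 9                            # elapsed from CreateFile to CloseFile

per_second = calls / seconds           # call rate
per_call_us = seconds / calls * 1_000_000  # average time budget per call

print(round(per_second))               # 68846 calls per second
print(round(per_call_us, 1))           # 14.5 microseconds per call
```

Even at only ~14.5 µs per call, the per-call overhead alone accounts for the bulk of the 9 seconds.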

As to the data loss, we get frequent reports of users losing their existing settings. That is, they didn't even modify any settings in the session, but WoW crashed and on the next startup their settings were lost. The best explanation I have for that is a crash while saving the settings. Though you are correct that if the saving happens on the main thread (which it looks like it does), that shouldn't be possible.

Infus 11-28-16 09:03 AM

I should at least mention that, as far as we addon developers can tell, the backup mechanism only ever creates a .bak file if one doesn't exist. Which is pretty pointless.
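For contrast, the behavior addon authors would expect from a backup mechanism is roughly "rotate the old file into .bak, then swap in the new one atomically". A hedged Python sketch of that idea (names and approach are illustrative, not WoW's actual code):

```python
import os
import tempfile

def save_settings(path, data: bytes):
    """Write settings crash-safely: temp file -> fsync -> rotate -> swap."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())          # data on disk before the swap
        if os.path.exists(path):
            os.replace(path, path + ".bak")  # rotate previous version
        os.replace(tmp, path)                # atomic rename into place
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)                   # clean up on failure
```

With this scheme a crash mid-write leaves only a stray temp file behind; the last good settings file and its .bak are never half-written.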

sylvanaar 11-28-16 09:24 AM

I followed your reasoning until the last part then you lost me.

I have been on a few of these threads. They usually bring up good questions but fail to go figure out why it is the way it is.

Most of WoW's files are larger than the ones you are observing.

If you truly wanted to speed up loading of those small files, you would need to find areas that you could take to them and say, "hey, this slows it down during startup/shutdown." They have their own things to think about; in general, load time isn't and hasn't been one of them.

Sure, they could speed up loading by loading more things fully into RAM and closing the file handles, then doing what they need to do with those things, such as compiling them and loading them into the Lua environment.

So what were they doing? Well, my guess is that the non-overlapped I/O and the small writes were an attempt to lose as little data as possible while persisting the SavedVariables.

Your best bet is to run it off a RAM disk and just take over responsibility for data consistency yourself. Perhaps in the process you will learn something that might really help the community; you sound like you have the skill.

I'll give you an example. At one point the community figured out that if we removed embedded libraries from our addons, the file references that were left behind would try to load and then fail during startup. It turns out that a missing file during loading can change your loading time by more than half. It is thought that addons and resources were loaded in a predefined relative order, and if there was an error loading one, you would have to stop the dependencies and restart the load on everything affected.

One thing you may say is that we tend to use global profiles and app settings (rather than per-character ones).

I suggested a novel way to get around this back then, but to do it right - you need a tool

Cheers, Good luck

SDPhantom 11-28-16 10:30 AM


Originally Posted by Infus (Post 321044)
Sure there's more going on. But 600k calls to WriteFile will take the majority of that time. That's 60k calls per second, or 60 calls per ms. I'm pretty sure from my experience as a software developer, that those calls do take the majority of those 9s.

The frequency with which a function is called says nothing about how long each individual call takes. It's just common sense.

I've also had personal experience writing quite a few serializers and know the steps they need to take to do their job. Believe me, it isn't an easy task on the CPU.

Stanzilla 11-28-16 01:11 PM

Documenting the response we got:


[18:32:13] TheDanW: the WriteFile thing, i don't think it was ever expected to write that much data into savedvars
[18:32:58] Infus: well, I don't want to know how much auctioneer saves
[18:33:00] TheDanW: it uses unbuffered writes for simplicity, i'll add it to the list to take a look at
[18:33:51] Infus: TheDanW: and the broken .bak file saving? did that make your list too?
[18:34:03] TheDanW: haven't read that
[18:34:45] Infus: last comment on that, or read a few lines, see just below my ping today
[18:35:21] Infus: well, I don't know what it should do, but currently it just doesn't do anything
[18:35:29] nevcairiel: TheDanW: the gist of it is that it only writes a .bak file when it doesnt exist yet, if it exists it never gets replaced with any newer data
[18:35:55] nevcairiel: so after the first run of the client you get a bak file, and then for years and years nothing :D
[18:39:08] TheDanW: hrm, not sure what changed
[18:39:16] nevcairiel: (that never worked)
[18:39:36] TheDanW: its never worked?
[18:39:47] nevcairiel: as long as I remember, it worked like it does now
[18:40:03] nevcairiel: it just doesnt work very well
[18:42:30] TheDanW: it looks like the intent of the code was to overwrite the .bak with the existing .lua, but yeah, it's never worked from what i can tell

SDPhantom 11-28-16 06:43 PM

I can honestly say that I'm not all that surprised that it's taking 620k write calls to output a 3.6 MB file. The reason is that a serializer processes a table and outputs a stream as it does its work. It writes a beginning enclosure when it encounters a table, it writes for every key/value pair, it writes an ending enclosure when it exits a table, and so on. When you have a large table and/or lots of nesting, that's a lot of individual writes.
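The observed numbers are consistent with that: the average payload per write works out to token-sized chunks.

```python
total_bytes = 3.6 * 1024 * 1024   # 3.6 MB settings file
writes = 620_000                  # write calls observed in Process Monitor

avg = total_bytes / writes
print(round(avg, 1))              # 6.1 bytes per write
```

Roughly six bytes per call is exactly the size of a single serialized token (a key, a short value, or an enclosure plus whitespace), matching the ~6-byte average reported earlier in the thread.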
