More accidentally quadratic stories: https://www.tumblr.com/accidentallyquadratic
A classic. Every time I see that link I read the whole thing starting from the new beginning.
...oops
I opened the link and just started reading. I have a really dumb question that may expose common knowledge I don’t have, about this quote:
> The total amount of space needed to represent this collection of strings is O(k n^2).
I haven’t ever seen O-notation represent RAM usage, just running time. Is this common?
Yes, very common. You've seen "time complexity"; it's very common to talk about "space complexity" as well.
Fun bonus: they can be interchangeable, e.g. increasing space to reduce time.
And any operation that takes n bits as input can be trivially turned into an O(1) time and O(2^n) space algorithm through tabulation.
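A toy sketch of what I mean, in Python (my illustration, with a made-up 8-bit "operation"):

    # Tabulation: trade O(2^n) space for O(1) query time by
    # precomputing the answer for every possible n-bit input.
    N_BITS = 8

    def popcount(x):
        # The per-query computation we want to avoid repeating.
        return bin(x).count("1")

    # One-time build: O(2^n) space (and O(2^n) time to fill).
    TABLE = [popcount(x) for x in range(2 ** N_BITS)]

    # Every query afterwards is a single array lookup: O(1).
    assert TABLE[0b10110010] == 4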
Assuming you ignore or amortize the time necessary to create the table in the first place, of course.
This is the basis for rainbow tables: precomputed tables for mapping hashes to passwords, with a space-saving “hash chaining” trick to effect a constant factor reduction in table size. Such tables are the reason why passwords must be hashed with a unique salt when stored in a database.
Yes, but total time is never going to be less than total space when expressed in big-O notation.
I’m not sure this definition of Big-O for space complexity is universal. When I’ve seen/used it, the size of the initial data wasn’t relevant, it was more about the additional memory required for the algorithm.
For example, an in-place algorithm like Bubble Sort would have O(1) space complexity, because it requires no extra memory (and 0 memory is a constant). Merge sort, on the other hand, is O(n) because it always uses additional memory for its intermediate stages, and that additional memory scales with n.
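A quick sketch of the two cases in Python (my own illustration, not anyone's actual code):

    def bubble_sort(a):
        # In place: O(1) auxiliary space, just a few scalar temporaries.
        n = len(a)
        for i in range(n):
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]

    def merge_sort(a):
        # Not in place: the halves and the merged output are new lists,
        # so the auxiliary space scales as O(n).
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:]); out.extend(right[j:])
        return out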
Doing a quick Google, the first few sites I find seem to use a similar understanding: https://www.geeksforgeeks.org/time-and-space-complexity-anal...
The confusion is around space complexity vs auxiliary space complexity (extra space).
Space complexity is O(n), but auxiliary space complexity uses Theta for notation instead.
But people aren't too picky about the notation and usually say something like "O(1) extra space" instead of using Theta.
https://en.m.wikipedia.org/wiki/Space_complexity
That’s not quite accurate. Big O notation and Theta notation are different ways of expressing the growth rates of functions - the choice of using Big O or Theta is independent of whether you’re trying to specify total space complexity or auxiliary space complexity.
Saying something is O(n) tells you it grows at most linearly, but this would also admit e.g. log n.
Saying something is Theta(n) tells you it grows exactly linearly: that is, it is not a slower growth rate like log n, nor a faster growth rate like n^2.
Why couldn’t it?
Because it takes n time to access n units of memory.
Heavily simplified due to caches etc., to the point where people sometimes measure cache misses instead, as that is usually what actually matters.
What if you allocate a huge chunk of memory and only use a small part of it? For example, checking if a list of numbers contains duplicates using a boolean array.
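Something like this, I mean (a hypothetical sketch; the bound on the value range is made up):

    def has_duplicates(nums, max_value=10**6):
        # Allocate one flag per *possible* value: the allocation is
        # O(max_value) even if nums only ever touches a few slots.
        seen = [False] * (max_value + 1)
        for x in nums:
            if seen[x]:
                return True
            seen[x] = True
        return False

    print(has_duplicates([3, 141, 59, 3]))  # True, after touching only 3 slots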
If your algorithm requires O(n) memory, any O(1) amount of memory can never be enough, no matter how huge. That's the entire point of O notation.
And if your implementation of an algorithm allocates more space in the big-O sense than it can actually touch (e.g. O(n) space for O(log n) time, or whatever), that's just a wasteful implementation. It doesn't make the algorithm itself require more space than it has time to actually use.
> Is this common?
Very. For instance, if you look at sorting algorithms on Wikipedia, they pretty much all list performance (best, worst, average) but also worst-case space complexity, in O notation.
Is there a way to do that without signing in?
While taking a computational complexity course, I wrote some Python code that should have run in O(n) but was closer to O(n^2). After asking on StackOverflow and some more experimentation, it turned out the garbage collector made it O(n^2); turning it off manually yielded the expected O(n) runtime.
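The shape of the experiment was roughly this (a reconstruction, not the original code):

    import gc
    import time

    def build(n):
        # Allocates many container objects, which is what keeps
        # triggering CPython's cyclic garbage collector.
        return [[i] for i in range(n)]

    for n in (100_000, 200_000, 400_000):
        t0 = time.perf_counter()
        build(n)
        with_gc = time.perf_counter() - t0

        gc.disable()
        t0 = time.perf_counter()
        build(n)
        no_gc = time.perf_counter() - t0
        gc.enable()

        print(f"n={n}: gc on {with_gc:.3f}s, gc off {no_gc:.3f}s")

If doubling n roughly quadruples the "gc on" time but only doubles the "gc off" time, the collector is the culprit.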
Why was the solution not to get rid of the linked list and replace it with a red/black tree?
Or hash table, or any other kind of tree.
My guess would be because it was implemented in C, where the usual practice, if you need a container type other than a fixed-size array, is to implement it yourself. IME, C code tends to use linked lists for a lot of things that in most other languages would be implemented using a better-suited, more performant container type from the standard library.
One way other languages can outperform C is that they make it easier to use the right data structure.
Blender's not even in C (the snippets are clearly C++). I wonder what the logic of having a sorted doubly linked list is: unless it's a skip list, it's not like you can do bisection searches.
I guess a flat array could still be debatable if you're usually inserting near but not at the end, as it'll still need to move all the following elements. But it seems dubious.
I got the impression that the expected performance of the doubly linked list insertion is actually O(1), since in most cases the elements arrive in sorted order. It's been a while since my algorithms courses, but I think all the 'normal' trees have O(log n) insertion in that case.
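For a list kept sorted, checking the tail first gets you that O(1) fast path. A sketch (mine, not Blender's actual code):

    class Node:
        def __init__(self, key):
            self.key = key
            self.prev = self.next = None

    class SortedDList:
        def __init__(self):
            self.head = self.tail = None

        def insert(self, key):
            node = Node(key)
            if self.tail is None:
                self.head = self.tail = node
                return
            if key >= self.tail.key:
                # Fast path: in-order arrivals append in O(1).
                node.prev, self.tail.next = self.tail, node
                self.tail = node
                return
            # Slow path: walk back from the tail, O(n) worst case.
            cur = self.tail
            while cur.prev is not None and cur.prev.key > key:
                cur = cur.prev
            node.next, node.prev = cur, cur.prev
            if cur.prev is not None:
                cur.prev.next = node
            else:
                self.head = node
            cur.prev = node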
O(1) is generally indistinguishable from O(log n) in practice, so if there is any chance of behavior other than the expected optimum, go with the algorithm that has the better worst case.
Neat find! This illustrates a huge benefit of open source software: individual developers can satisfy their curiosity about particular issues (with extra motivation from being personally affected) and improve the package for everyone once the issue is fixed.
Except it wasn't improved in this case, if I'm reading the gist correctly. The author didn't like their fix enough to make a pull request.
Just like “this entire meeting could have been an email”, all this effort writing this gist could have been spent creating a PR.
Ah yes, the most reliable solution to a wrong choice of a data structure: add data-specific hacks until it works. For the life of me I can't figure out why people never consider replacing linked lists. Even without worst-case O(n) insertion, they usually behave worse than vectors, deques, or hives.
What is a hive in the context of data structures?
I meant something close to https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p04...
Perhaps a simple way to look at it is that it's like a dynamic array, but when the capacity is exceeded, you don't reallocate the array, you just allocate a new (exponentially larger) chunk and keep the old one as-is. Then just link the chunks in a (very short) linked list, or keep a separate small list of chunks (you're never gonna need more than 48, so you can just have a fixed allocation for it), or what have you. The bonus here is that it reduces latency on pushing and has more predictable performance.
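A bare-bones sketch of that layout (mine, ignoring erasure/reuse, which is the other half of what hive-style containers handle):

    class ChunkedArray:
        def __init__(self, first_chunk=16):
            self.chunks = [[None] * first_chunk]
            self.fill = 0  # used slots in the last chunk

        def push(self, value):
            last = self.chunks[-1]
            if self.fill == len(last):
                # Full: allocate a new chunk twice the size. Old chunks
                # stay where they are, so nothing is ever copied or moved.
                last = [None] * (2 * len(last))
                self.chunks.append(last)
                self.fill = 0
            last[self.fill] = value
            self.fill += 1

        def __iter__(self):
            for chunk in self.chunks[:-1]:
                yield from chunk
            yield from self.chunks[-1][:self.fill]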
I assume it's an autocorrect of heap.
There's also the choice of a data structure that appears to force multiple copies of the same thing, rather than thin references and, ideally, copy-on-modify.
"Now, it should be clear that most of the time (28s-53s on the time line) of importing that particular USD file is spent in id_sort_by_name".
Luckily there's a graph screenshot. But the graph it displays is incomprehensible. Without "id_sort_by_name" being on the one bar, I wouldn't even know what I'm looking at.
The timeline view in the upper screenshot is fairly straightforward: the region of 28-53s is selected, other parts of the timeline are grayed out, and the statistics for the selected time region down below show 94.4% of "Total CPU" being in id_sort_by_name.
The lower screenshot is a flame graph. If you haven't encountered one of those before, it's totally reasonable to not know how to read it. But it's a standard and common way to present this sort of data. See https://www.brendangregg.com/flamegraphs.html for more information.
2 questions I would look into before anything else:
1. Does this have to be sorted?
2. Why is it sorted alphabetically instead of naturally?
Why not just change the “%s.%u” to “%s.%010u”, with no other code changes?
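Quick illustration of the idea in Python (hypothetical names; Python treats %u like %d):

    # Unpadded names compare lexicographically, so "12345" < "9888":
    names = ["Cube.%u" % n for n in (9888, 12345)]
    print(sorted(names))   # ['Cube.12345', 'Cube.9888']  (numerically wrong)

    # Zero-padding makes lexicographic order match numeric order:
    padded = ["Cube.%010u" % n for n in (9888, 12345)]
    print(sorted(padded))  # ['Cube.0000009888', 'Cube.0000012345']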
You just created a fix that works up to a large, but arbitrary, number, pushing the problem down the road instead of fixing it.
If I understood the article correctly it caused problems when the same file was imported multiple times, or when another file with the same base name was imported.
https://gist.github.com/bssrdf/397900607028bffd0f8d223a7acdc...
Yes, TFA says the issue is because "12345" sorts before "9888"
My solution avoids that.
“alphebatically” is used 3 times, and otherwise no typos that I noticed. So I wondered if it’s a term I’m not familiar with.