It’s not that funny.
Docker is like a virtual machine, but you only run one specific program in it. Pretty much exactly what the meme describes.
That’s fair enough. The common misconception is that waterfall is great for space missions, when in reality NASA is doing agile.
I agree that not everybody is NASA, so what works for them doesn’t necessarily work for everyone.
NASA also successfully flew a helicopter on Mars first try.
It’s barely waterfall planning either. Often there’s no planning, at least no coordinated one.
At my current workplace we lack coordinated planning between teams. It seems like everybody is working in their own direction, and it can take months until we get feedback from other teams. It's mostly a product management problem.
The author is also hyping up waterfall too much. Agile was created because waterfall has its shortcomings (e.g. the team realizes too late that what they’re building isn’t what the customer wants).
But I also think it represents how poorly these ideas are implemented. People say they do agile/kanban/scrum, but in reality they do some freak version of these.
I think this is a bit disingenuous. There’s no customer interaction in these panels.
So waterfall would be:
Customer says they want to go to Mars.
You spend years building a rocket capable of going to Mars, draining all the company budget in the process.
Customer then clarifies they actually meant they wanted to go to Mars, Pennsylvania, USA - not the planet!
It has been a pleasure having this internet argument with you. I learned a bit, and you learned a bit. It’s a win win :)
My implementation: https://pastebin.com/3PskMZqz
Results at bottom of file.
I’m taking into account that when I update a hash, all the hashes to the right of it should also be updated.
The number of hashes is about 2.71828 x n!, as predicted. The time seems to be proportional to n! as well (n = 12 is about 12 times slower than n = 11, which in turn is about 11 times slower than n = 10).
Interestingly this program turned out to be a fun and inefficient way of calculating the digits of e.
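For anyone curious, here's a minimal sketch of that "inefficient way of calculating e". It's not my pastebin implementation, just the counting part of the idea: it tallies how many prefix hashes would need recomputing across a lexicographic permutation walk, without computing any actual hashes.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    const int n = 10;
    std::vector<int> perm(n);
    std::iota(perm.begin(), perm.end(), 1);  // 1 2 3 ... n

    std::vector<int> prev = perm;
    uint64_t updates = n;  // the first permutation hashes all n prefixes
    uint64_t count = 1;    // number of permutations seen (ends at n!)

    while (std::next_permutation(perm.begin(), perm.end())) {
        ++count;
        // Find the leftmost position that changed; every prefix hash from
        // there to the end must be recomputed.
        int first = 0;
        while (perm[first] == prev[first]) ++first;
        updates += n - first;
        prev = perm;
    }
    // The ratio approaches e = 2.71828... as n grows.
    std::cout << double(updates) / double(count) << "\n";
}
```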
The only thing I can find is that it has a 128-bit graphics-oriented floating-point unit delivering 1.4 GFLOPS.
Probably only for marketing reasons. Everyone was desperate not to be worse than the N64.
It's a poorly worded article. YouTube Premium "limits ads" in the sense of being completely ad-free (besides in-video sponsorships). YouTube hasn't gone down that route yet.
I don't think it's an unpopular opinion, but I'm not sure how YouTube can best deal with it. There's SponsorBlock, but it relies on crowdsourced data.
Probably not in consumer-grade products in the foreseeable future.
More complexity with barely any (practical) benefits for consumers.
Where are you getting that from? YouTube Premium is ad-free (so far).
Not true 128-bit. It has 128-bit SIMD capabilities, but that's about it. Probably mostly for marketing reasons, to show how much better it is than the N64 (which is also "64-bit" for marketing reasons).
By that logic, we have 512-bit computers now: https://en.wikipedia.org/wiki/AVX-512
So in your code you do the following for each permutation:
for (int i = 0; i < n; i++) {
You're iterating through the entire list for each permutation, which yields O(n x n!) time complexity. My idea was an attempt to avoid that extra factor of n.
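To make the pattern concrete, here's a sketch of what I mean (with an FNV-1a-style mix as a toy stand-in for a real hash; the loop structure, not the hash, is the point):

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

// Toy stand-in for a real hash; the point is the O(n) inner loop.
uint64_t hash_whole_list(const std::vector<int>& v) {
    uint64_t h = 0xcbf29ce484222325ULL;   // FNV-1a offset basis
    for (int x : v) {                     // O(n) work per permutation
        h ^= static_cast<uint64_t>(x);
        h *= 0x100000001b3ULL;            // FNV-1a prime
    }
    return h;
}

int main() {
    std::vector<int> perm(8);
    std::iota(perm.begin(), perm.end(), 1);
    uint64_t acc = 0;
    do {
        acc ^= hash_whole_list(perm);     // runs n! times -> O(n x n!) overall
    } while (std::next_permutation(perm.begin(), perm.end()));
    std::cout << acc << "\n";             // print so the loop isn't optimized away
}
```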
I’m not sure how std implements permutations, but the way I want them is:
1 2 3 4 5
1 2 3 5 4
1 2 4 3 5
1 2 4 5 3
1 2 5 3 4
1 2 5 4 3
1 3 2 4 5
etc.
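(For reference, std::next_permutation is defined to step through permutations in exactly this lexicographic order, as long as you start from a sorted sequence:)

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};
    do {
        for (int x : v) std::cout << x << ' ';
        std::cout << '\n';  // 1 2 3 4 5, then 1 2 3 5 4, then 1 2 4 3 5, ...
    } while (std::next_permutation(v.begin(), v.end()));
}
```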
Note that the last 2 numbers change every iteration, the third-last number changes every 2 iterations, and the fourth-last number changes every 2 x 3 iterations. The first number in this example changes every 2 x 3 x 4 iterations.
This gives us an idea of how often each hash needs to be updated. We don't need to recalculate the hash for 1 2 3 between the first and second iteration, for example.
The first hash will be updated 5 times. Second hash 5 x 4 times. Third 5 x 4 x 3 times. Fourth 5 x 4 x 3 x 2 times. Fifth 5 x 4 x 3 x 2 x 1 times.
So the time complexity should be the number of times we need to calculate the hash function, which is O(n + n(n - 1) + n(n - 1)(n - 2) + … + n!) = O(n!).
EDIT: on second thought, I'm not sure this is a legal simplification. It might actually be O(n x n!), since the number of terms in the sum grows with n. But in that case, shouldn't all permutation algorithms be O(n x n!)?
EDIT 2: found this link https://stackoverflow.com/a/39126141
The sum is bounded by about 2.71828 x n! (that is, e x n!), which makes it O(n!), so it is a legal simplification! (My reasoning was wrong, but I arrived at the correct conclusion.)
END EDIT.
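Spelling out the bound from that answer (my own restatement, using the per-hash update counts above):

```latex
\sum_{k=1}^{n} \frac{n!}{(n-k)!}
  \;=\; n! \sum_{j=0}^{n-1} \frac{1}{j!}
  \;<\; n! \sum_{j=0}^{\infty} \frac{1}{j!}
  \;=\; e \cdot n!
```

The constant tends to e ≈ 2.71828 as n grows, which is exactly the 2.71828 x n! figure.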
We do the same for the second list (for each permutation), which makes it O(n!^2).
Finally we compute the Hamming distance, but this is done between constant-length hashes, so it's constant time, O(1), in this context.
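Something like this for 64-bit words, say (a sketch; a real sha256 digest would just be a few of these words):

```cpp
#include <bitset>
#include <cstdint>

// Hamming distance between two fixed-width hashes: XOR, then popcount.
// Fixed width means this is O(1) regardless of the list length n.
int hamming64(uint64_t a, uint64_t b) {
    return static_cast<int>(std::bitset<64>(a ^ b).count());
}
```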
Maybe I can try my own implementation once I have access to a proper computer.
Time complexity is mostly useful in theoretical computer science. In practice it's rare that you need to accurately estimate time complexity. If it's fast, it's fast. If it's slow, you should try to make it faster. And often making code faster isn't about optimizing its time complexity.
All you really need to know is:
There are exceptions, so don’t always follow these rules blindly.
It’s hard to just “accidentally” write code that’s O(n!), so don’t worry about it too much.
Good effort actually implementing it. I was pretty confident my solution was correct, but I'm not as confident anymore. I will think about it a bit more.
By “certain distance function”, I mean a specific function that forces the problem to be O(n!^2).
But fear not! I have an idea for such a function.
So the idea for such a function is the Hamming distance of a hash (like sha256). The hash is computed iteratively by h[n] = hash(concat(h[n - 1], l[n])).
This ensures that each hash depends on every element to its left, so changing one element forces every hash to the right of it to be recomputed.
No idea of the practical use of such an algorithm. Probably completely useless.
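A sketch of what I mean, with a toy 64-bit mixer standing in for sha256 (the mixer is my own placeholder, not part of the idea):

```cpp
#include <cstdint>
#include <vector>

// Toy stand-in for sha256: anything that mixes (previous hash, element) works.
uint64_t toy_hash(uint64_t prev, uint64_t elem) {
    uint64_t h = prev ^ (elem + 0x9e3779b97f4a7c15ULL);
    h ^= h >> 33;
    h *= 0xff51afd7ed558ccdULL;
    h ^= h >> 33;
    return h;
}

// h[i] = hash(concat(h[i-1], l[i])): each hash depends on every element up
// to i, so changing element i forces recomputing h[i] and everything after.
std::vector<uint64_t> chain_hashes(const std::vector<uint64_t>& l) {
    std::vector<uint64_t> h(l.size());
    uint64_t prev = 0;  // fixed seed in place of h[-1]
    for (size_t i = 0; i < l.size(); ++i) {
        prev = h[i] = toy_hash(prev, l[i]);
    }
    return h;
}
```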
It's a marketing trick. First suggest an insanely high price. The customer rejects it. Then suggest a lower price that is still expensive. The customer will be more inclined to buy, because the new lower price feels like a good deal relative to the incredibly expensive old one.
If they went with the lower price right away, the customer wouldn't be as inclined to buy, because they wouldn't have the insanely high price as a reference point.