Why would you use anything other than Math.max?
Some of us have trust issues. Or worked with Java.
Which, now that I think about it, comes to the same thing.
Well, the question sort of implies that you need to implement Math.max yourself, for whatever reason. Probably as an exercise. It doesn’t make sense to reuse a library that implements the feature if you’re explicitly being asked how you would implement it yourself.

This is why I think school and interviews are like a whole different universe from the one where actual work gets done.
In some ways they can be wholly different, but I don’t think this is a good example of it.
Any programmer who cannot implement “take two numbers, return the larger one” is clearly not very competent. Even though you’re never going to literally need to implement Math.max yourself at work, you are going to need basically the same types of skills. Probably 95% of the work I do day-to-day is stuff you’d learn in your first year at uni, and this just shows that you’ve got that ability.
In practice, the best interviews I’ve had usually set a slightly more complicated task as a do-in-your-own-time problem and then go through what you did in the actual interview. Problems like “read a list of names in the form , each name on a separate line, from a text file. Sort the names by last name, then by other names. Output to another text file. Include unit tests.” They wouldn’t then expect you to re-implement the sorting algorithm itself, but more want to look at the quality of code, extensibility, etc.
More basic questions like the one in the OP, or fizzbuzz, are decent as well, and a big step up from lame questions like “what does SOLID stand for? What does the Liskov substitution principle mean to you?” Even if they’re not quite as valuable as a miniature project.
I think you can probably make the question a lot more interesting by asking them to implement max without using any branching syntax. I’m not saying that is necessarily a good interview question, but it is certainly more interesting. That might also be where some of the more esoteric answers are coming from.
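For instance (my own sketch, not something from the meme): in JS you can get a branch-free max by leaning on boolean-to-number coercion, with no if, ternary, or Math.max involved.

// Branch-free max sketch: booleans coerce to 1/0 under multiplication,
// so one of the two terms is always zeroed out.
const branchlessMax = (x, y) => x * (x >= y) + y * (x < y);

console.log(branchlessMax(3, 7));   // 7
console.log(branchlessMax(-2, -8)); // -2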
In practice, the best interviews I’ve had usually set a slightly more complicated task as a do-in-your-own-time problem and then go through what you did in the actual interview.
The best interviews you’ve had are the ones where you’re doing free work on your own time?
“Work” is a debatable term. It’s not work that provides any direct value to the company, if that’s what you mean. But yes, it involves more effort on my part.
But yes. Not only does this method let me show that I’m good at what I do (far better than nonsense theory questions do), I have also found that companies that use this approach tend to come across as a better fit in other ways during the interview process.
For me, a good interview is a dialogue where the company representative shows me as much about the company as I show them about myself as a candidate. Take-home tasks are okay, I guess, but I suspect they might balk at me requesting they handle a mock HR issue, or whatever, for me!
The thief way is actually the best among all of these imo, in terms of readability and efficiency.
Not using thief is professional incompetence unless you’re doing something deeply cursed
Like pair programming.
Sometimes you need to minimize function calls in a tight loop, but otherwise yeah
Why would you be using JS in this scenario?
Node.js, electron 🤷♂️
Something has gone horribly wrong if you’re trying to do such optimisations when you’ve already chosen JavaScript…let alone Electron.
And yet it happens, just look at the molasses that is Teams
Thankfully the only interaction I have with teams is when a supplier arranges the call. Once every two weeks. It grosses me out every time…and that’s the Web app.
Do you really think they have done such optimisation efforts as minimising function calls? I can’t imagine it’s required for what is actually a fairly simple frontend app. The complexity is the enabling stack on the backend.
I was under the impression that modern compilers just inline something like that, and even in older languages (like C) tricks are used to inline it (typically MAX is a macro rather than a real function, so it’s always inlined).
Ultimately it depends not just on what you’re doing but also on the language and compiler you’re using.
If you’re optimizing that hard you should probably sort the data first anyway, but yeah, sometimes it’s absolutely called for. Not that I’ve actually needed that in my professional career, but then again I’ve never worked close enough to metal for it to actually matter.
That said, all of these are implemented as functions, so they’re already costing the function call anyway…
Sometimes, but practically never. Just be a thief.
Fr. People like to reimplement wheels tho
They’re setting a variable to a function. Just use the original function. All thief does is obfuscate for literally no gain except character count.
I presumed it to be a stand-in for just directly using Math.max, since there’s no nice way to show that in a valid code snippet.
Well, it’s called Thief. They’re stealing the function and making it look like they wrote it, hence max1.

Yeah, that’s my reading as well.
Sounds good to me
TDD
const max12 = (x, y) => { if (x === 1 && y === 2) { return 2; } else if (x === 7 && y === 4) { return 7; } else { return x; } };
Thief. Writing code is for chumps, and the more code you right, the more of a chump you are.
why say many word when few do trick
Why 🗣️📈 word when 😃👍
Writing code is for chumps, and the more code you right, the more of a chump you are.
So you’re the one in there wronging up my code?
It’s too late now to wright my wrongh
¯\_(ツ)_/¯
Where’s the Julia programmer that hits every one of these with @benchmark and then works for six hours to shave three nanoseconds off of the fastest one?
(Example: https://discourse.julialang.org/t/faster-bernoulli-sampling/35209)
404 or walled
works here
Mathematician 2 kinda blew my mind, kinda obvious, just can’t believe I was never taught or thought about it.
Lost me when it used Math.abs after calling Math.max a thief.
Math.Sqrt((x-y) * (x-y))
(I’ve actually seen someone use this)
Yeah, that was my favorite one
I’ve been staring at it for 10 minutes and I’m still not convinced it works.
Simple, really. Abs(x-y) is the absolute difference between the two numbers, so a positive value. Adding abs(x-y) to the smaller of the two numbers turns it into the bigger number. Add the bigger number itself and you have 2 times the bigger number, so dividing by 2 gives you the max.
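Presumably the formula in question is max(x, y) = (x + y + |x − y|) / 2 (an assumption, since the image itself isn’t quoted here); a quick sketch:

// Mathematician-2-style max: x + y + |x - y| is twice the larger number,
// so halving it recovers the max.
const max2 = (x, y) => (x + y + Math.abs(x - y)) / 2;

console.log(max2(3, 7));   // 7
console.log(max2(-5, -2)); // -2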
Thief no doubt
Thief gang. Why stand on shoulders of giants if you’re not using it to your advantage?
Procrastinator + troll.
return x
Bit hacker 2 is really fascinating. It uses a bit mask of all 1s (-1) or all 0s (0) and takes advantage of the fact that y ^ (x ^ y) = x and y ^ 0 = y
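A sketch of what that trick presumably looks like (assuming the usual XOR/mask construction, since the image isn’t quoted here). Note that JS bitwise operators truncate to 32-bit integers, so this only behaves for values in that range:

// -(x > y) is -1 (all 1s) when x > y and 0 otherwise, so the mask keeps
// either (x ^ y) or nothing before the final XOR with y.
const bitMax = (x, y) => y ^ ((x ^ y) & -(x > y));

console.log(bitMax(3, 7)); // 7
console.log(bitMax(9, 4)); // 9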
wtf kind of cursed programming language is this? JS? it’s so ugly, in no universe should a function look like that
but obviously as a rust enjoyer i have to do it like
fn max<T: PartialOrd + Copy>(nums: &[T]) -> Option<T> {
    match nums.len() {
        0 => None,
        1 => Some(nums[0]),
        _ => {
            // Only index into the slice once we know it's non-empty.
            let mut greatest: &T = &nums[0];
            for num in nums {
                if num > greatest {
                    greatest = num;
                }
            }
            Some(*greatest)
        }
    }
}
edit: lemmy formatting REALLY hates references and generics it seems… time to go back to medieval times
Ah yes, Rust. The language that somehow manages to be as verbose as possible, with as much jargonized shorthand as a computer could handle.
Exactly, I don’t understand why languages have decided that every keyword needs to be as randomly minified as possible: fn, def, rune (ok, that’s not minified, just a dumb name), fmt, std. Many of these things aren’t new, but programmers recognize that descriptive variable names are important; the same should be true for keywords.
Isn’t it php?
#define max(x, y) ({            \
    __auto_type __x = (x);      \
    __auto_type __y = (y);      \
    __x > __y ? __x : __y;      \
})

GNU C. Also works with Clang. Avoids evaluating the arguments multiple times. The optimizer will convert the branch into a conditional move; if it doesn’t, I’d replace the ternary with the “bit hacker 2” version.
deleted by creator
__auto_type is a compiler builtin, not a library function. It’s not a function at all; the parentheses are for precedence & grouping.
Mathematician 3
Max(x, y) = floor(ln(e^x + e^y))
So 0.3 ≈ 1 − ln(2) = max(1 − ln(2), 1 − ln(2)), but floor(ln(2·e^(1 − ln(2)))) = floor(ln(2) + (1 − ln(2))) = floor(1) = 1?
That would be Engineer 2, not Mathematician 3 xD.
Just out of curiosity, what was your idea behind that?
Guess it only works with integers, especially because of the floor function, which is going to give you an integer at the end every time.
Not my idea, I learned it somewhere in a statistics class in college. The idea is that the exponential function grows really fast, so small differences between variables become extreme differences on the exponential; the log function then reverses the exponential, but because it grew more for the biggest variable it reverts to the max variable, making the other variables the decimal part (this is why you need the floor function). I think it’s cool because it works for any number of variables, unlike Mathematician 2, which only works for 2 variables (maybe it can be generalized to more variables, but I don’t think it can be done).
For a min function you can use ceiling(-ln(e^-x + e^-y)).
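A sketch of that log-sum-exp trick (my own code, assuming the formulas above). For well-separated integers ln(e^x + e^y + …) exceeds the true max by less than ln(n), so flooring recovers it, but as the replies below point out it breaks down once the values are close together or there are too many of them:

// Soft-max style max/min: works for any number of arguments, with the
// caveats about precision and the ln(n) error discussed in this thread.
const maxLSE = (...nums) =>
  Math.floor(Math.log(nums.reduce((s, n) => s + Math.exp(n), 0)));
const minLSE = (...nums) =>
  Math.ceil(-Math.log(nums.reduce((s, n) => s + Math.exp(-n), 0)));

console.log(maxLSE(1, 5, 3)); // 5
console.log(minLSE(1, 5, 3)); // 1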
To be fair, it does seem to work for any two numbers where one is > 1. As x, y → ∞: ln(e^x + e^y) <= ln(2·e^(max(x,y))) = max(x,y) + ln(2).
I think it’s cool because it works for any number of variables
Using the same proof as before we can see that ln(Σ_{i∈I} e^(x_i)) <= ln(|I|·e^(max_i x_i)) = max_i x_i + ln(|I|).
So it would only work for at most [base of your log, so e<3 for ln] variables.
After searching a little, I found the name of the function and it’s proof: https://en.wikipedia.org/wiki/LogSumExp
Thanks for looking it up :).
I do think the upper bound on that page is wrong though. Incidentally, in the article itself only the lower bound is proven, but in its sources this paper proves what I did in my comment before as well:
For the upper bound it has max + log(n) (Section 2, eq. 4). This lets us construct an example (see reply to your other comment) to disprove the notion about being able to calculate the max for many integers.
I just remembered where I learned about that function: in this course on convex optimization, which unfortunately I never had the opportunity to finish, but it is really good.
I don’t have a mathematical proof, but doing some experimental tests in Excel, using multiple (more than 3) numbers and using negative numbers (including only negative numbers), it works perfectly every time.
Try (100, 100, 100, 100, 100, 101) or 50 ones and a two; those should result in 102 and 4 as the max respectively. I tried using fewer numbers, but the fewer numbers you use, the higher the values have to be (to be exact, the smaller the deviation (%-difference) between the values, the higher they have to be), and WolframAlpha does not like 10^100 values, so I stopped trying.
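For what it’s worth, working through those two examples by hand (my own arithmetic, not from the thread): ln(5·e^100 + e^101) = ln(e^100·(5 + e)) = 100 + ln(5 + e) ≈ 102.04, which floors to 102 even though the true max is 101; and ln(50·e + e^2) ≈ ln(143.3) ≈ 4.97, which floors to 4 instead of 2.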
Removed by mod
Procrastinator.
Okay, but seriously: “Thief”. Why reimplement it if it’s already available in the language?
Thief
And not feeling one byte bad about it :p