

Anthropic’s uptime website is actually one of the funniest jokes of this year


This is only really true if you account for older hardware in circulation; most new hardware will ship with an AV1 hardware decoder.
On top of this, software decoder support for AV1 is remarkable: libdav1d is a marvelous piece of software, bringing AV1 playback to a plethora of devices that lack hardware decoder support.


This is only really true if you have extreme throughput requirements, a regular VOD operation can get by fine on software encoding.
If you have the kind of throughput needs that warrant hardware encoders, you’re going to want to go ASIC anyway, so regular server hardware won’t cut it. YouTube, for example, had to build their own ASICs because of the downright absurd scale they operate at.
Pronounced “kloo-awk” approximately


I mean, that’s gotta be grounds for termination if anything


It’s a really dumb way to frame what the OpenAI people actually said - they are saying that the people applying to them want to know how many tokens they can use as a tool to accomplish the job they are applying for. There’s a fundamental difference here to compensation: tokens as compensation would mean tokens the applicants could utilize for their own purposes, whatever those may be.
To illustrate - I would probably be reluctant to work for a company which would not be willing to spend the amount of money that would get me a more or less top of the line computer with which to perform my job. Not because I consider my company-provided development machine as a part of my compensation - it is merely a tool I use for my job.
The people applying for these jobs are the kinds of people who think that burning an exorbitant amount of tokens will make them quite significantly more productive, so the metaphor of having the best tools available to accomplish the task at hand extends here, in accordance with their belief system.
There’s then the quote from the VC ghouls, but I don’t think anyone could accuse them of being competent to any significant degree, so their quotes are most appropriately used as toilet paper.


Sure, but that can be said about almost anything.
Still, I’d be surprised if they went the route of embedding ads into the stream, in part because of measurability, skippability, etc. It’s definitely not out of the question, but I think we still have a ways to go before we get there.
And even then, tools like yt-dlp would probably be able to apply some heuristics to figure out which segments are foreign to the stream and slice them out that way. Blocking yt-dlp would require DRM, which in turn requires changing the transcoding pipeline in a pretty non-trivial way. I also doubt they would willingly go this route.
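To make the heuristic concrete, here is a toy sketch of the kind of filtering a downloader could apply - this is not yt-dlp’s actual logic; the function name and the assumption that spliced-in ads are bracketed by HLS EXT-X-DISCONTINUITY markers are illustrative only:

```python
# Hypothetical heuristic: drop segment runs bracketed by EXT-X-DISCONTINUITY
# markers, assuming spliced-in ads are delimited that way. Not yt-dlp code.

def strip_discontinuity_runs(playlist_lines):
    """Return HLS media playlist lines with discontinuity-bracketed runs removed."""
    out, in_ad = [], False
    for line in playlist_lines:
        if line.strip() == "#EXT-X-DISCONTINUITY":
            in_ad = not in_ad  # toggle at each splice boundary
            continue
        if not in_ad:
            out.append(line)
    return out
```

A real implementation would need more signals than discontinuity tags alone (segment durations, codec parameters, URL patterns), since discontinuities also occur legitimately, e.g. at live-stream restarts.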


That would mean re-encoding every video for every user and would need an insane amount of computing power.
You actually don’t have to, on account of how adaptive video streaming works. It’s fully possible to serve a few segments of ad content mid-stream.
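As a rough illustration of why no re-encoding is needed: with HLS-style adaptive streaming, the server only has to edit the playlist, inserting pre-encoded ad segments between content segments with discontinuity markers. The sketch below is a toy example with made-up segment names, not YouTube’s actual pipeline:

```python
# Toy sketch of server-side ad insertion into an HLS media playlist.
# Segment URIs and durations are invented for illustration.

def splice_ads(content_segments, ad_segments, after_index):
    """Build HLS playlist lines with ad segments spliced in mid-stream.

    Each segment is a (duration_seconds, uri) tuple. The EXT-X-DISCONTINUITY
    tags tell the player that codec/timestamp parameters may change, so the
    ad does not need to match the content's encoding at all.
    """
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for i, (dur, uri) in enumerate(content_segments):
        lines.append(f"#EXTINF:{dur:.3f},")
        lines.append(uri)
        if i == after_index:
            # Splice the ad break: discontinuity in, ad segments, discontinuity out.
            lines.append("#EXT-X-DISCONTINUITY")
            for ad_dur, ad_uri in ad_segments:
                lines.append(f"#EXTINF:{ad_dur:.3f},")
                lines.append(ad_uri)
            lines.append("#EXT-X-DISCONTINUITY")
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)
```

The ad break is encoded once and served to everyone; only this cheap text manipulation happens per viewer.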


This is my understanding as well, yeah.


Those will not block YT ads.
This is correct
but YouTube ads are delivered directly into the video stream.
This is false


Seems particularly high in snake oil content, to be honest.


RTOs are most often a “one free layoff”-card that businesses play, so firing someone for criticizing it is very much in line with the underlying intent of the policy.


Complete hands-off no-review no-technical experience vibe coding is obviously snake oil, yeah.
This is a pretty large problem when it comes to learning about LLM-based tooling: lots of noise, very little signal.


So far, there is a serious cognitive step needed to be productive that LLMs just can’t take. They can output code, but they don’t understand what’s going on. They don’t grasp architecture. Large projects don’t fit in their token window.
There’s a remarkably effective solution for this, that helps both humans and models alike - write documentation.
It’s actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?


good benefits and perks.
Didn’t they literally just introduce free coffee at the office post-pandemic?


Not to say that I would willingly choose to work at Amazon, but I do know the reason. It’s documented here: https://www.levels.fyi/companies/amazon/salaries/software-engineer
This is immediately disprovable by anyone who has ever implemented push notifications on Android