

If Justin Trudeau gets active on this, you can probably get it to Gulf of Canada (Gulf of Mexico) (Gulf of America).
One cannot help but notice that NBC is directing viewers to said website rather than to other websites – ones not owned by Kanye West and not selling swastika T-shirts – which probably have a harder time getting potential customers to show up. I see that they also have pricing information and product photos.
One imagines that social media and the like are probably also directing a lot of people there.
This does suggest that West might be onto something.
In a world where people and organizations didn’t discuss and link to the product, he might not have the same incentive.
Well, that’s potentially concerning if you work in the industrial sector. From the standpoint of the country overall, though, if you expect Germany to transition toward the tertiary sector making up a larger portion of the economy, as countries generally do as they develop…shrugs
Wouldn’t the sync option also confirm that every write arrived on the disk?
If you’re mounting with the NFS sync option, that’ll avoid the “wait until close and probably reorder writes at the NFS layer” issue I mentioned, so that’d address one of the two issues, and the one that’s specific to NFS.
That’ll force each write to go, in order, to the NFS server, which I’d expect would avoid problems with the network connection being lost while flushing deferred writes. I don’t think that it actually forces it to nonvolatile storage on the server at that time, so if the server loses power, that could still be an issue, but that’s the same problem one would get when running with a local filesystem image with the “less-safe” options for qemu and the client machine loses power.
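As an aside, the nfs(5) man page notes that you can also get that per-write behavior for a single file – without mounting the whole export sync – by opening it with O_SYNC. A minimal sketch (the path is made up):

```c
/* Open with O_SYNC so that each write() is pushed to the NFS server
 * (or, on a local filesystem, toward the device) before returning. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/nfs/disk.img", O_WRONLY | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    const char buf[] = "some data";
    if (write(fd, buf, sizeof buf) < 0)  /* returns only once it's sent */
        perror("write");

    close(fd);
    return 0;
}
```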
What you’re describing is called a linter, and they’ve existed for ages.
Yup, and I’ve used them, but they’ve had hardcoded rules, not models trained on code.
The only place I’ve seen prices listed that high in the US is in California.
California apparently has some sort of minimum cage size mandate that a lot of the rest of the US doesn’t, so it can’t pull in eggs from the rest of the US, which apparently contributes to California’s problems, since it fragments the US market. Which is probably pretty great if you’re an egg producer in California who hasn’t been hit by bird flu – you’ve got a protected market, and a lot of your competition has been wiped out – but sucks if you’re an egg consumer.
Bird flu continues to play a part in higher egg prices in California.
The U.S. Department of Agriculture, in a Jan. 10 report, said a dozen large shell eggs in the state rose to $8.97.
Some states, like California, are being hit especially hard by the egg crunch, and part of that is likely a result of state-level legislation.
California’s Proposition 12, also called the Farm Animal Confinement Initiative, places restrictions on how hens, sows and veal calves can be kept.
The bill, which took effect in recent years, in part banned confinement of egg-laying hens (chickens, turkeys, ducks, geese and guinea fowl) in certain areas with less than 1 square foot of usable floor space per hen.
Other states, including Arizona, Colorado, Massachusetts, Michigan, Nevada, Oregon and Washington, have similar laws that specifically provide animal welfare protections to egg-laying hens.
That limits how eggs can be produced and what can be sold in each state. Those that allow only cage-free products already face fewer suppliers and farms (a little more than a third of U.S. egg layers are cage-free, according to the USDA). Manufacturers and sellers also are facing a slowdown as they change operations to comply with such laws.
That’s designed to work at 120V. The PSU-GPU connector is 12V. I don’t know if it’d actually work well – at the same current, the contacts would be moving a tenth the power, I guess.
Honestly, the main standardized 12V DC connector that I can think of that we use is the car cigarette lighter, which I don’t think normally moves anything like that much power, and which is terrible anyway: it doesn’t lock into place and was never intended as a power source. I would like a 12V locking connector that can move a lot of juice.
https://www.amazon.com/JacobsParts-Cigarette-Lighter-Adapter-Electronics/dp/B012UV3QI4
Input Voltage: 12 Volts
Amperage: 2 Amps
That particular cable and plug will handle 24 watts (12 V × 2 A). I know that you can get higher-power ones – I had to go out of my way to find one that could do 100W.
My guess is that the 12V problem will never really be addressed and we’ll just go to USB-C PD at up to 50.9V for our DC power connector standard. Which I guess works too as long as the amperage doesn’t get too high, but that won’t be enough to feed a current high-end Nvidia GPU.
Maybe have, like, multiple USB-C PD connectors in parallel. Three should do it.
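Rough numbers: USB-C PD 3.1 EPR tops out at 240 W per cable (5 A at 48 V), so three cables is 3 × 240 W = 720 W, which would cover something like an RTX 5090’s 575 W with headroom – handwaving away the question of whether you could actually combine three PD supplies cleanly.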
Find out what people are upset about. Go put on a large performance and get in the news in such a way that you give the impression that you’re doing that. Most people will not go looking for data; they rely on their gut impression from the tone of news coverage.
His campaign for his first term was primarily run on protectionist US trade relations and stopping illegal immigration via building a wall.
He put on a major act of “killing” TTIP and TPP, whose negotiations had already failed. He slightly tweaked and renamed NAFTA. He used very aggressive phrasing, broke norms for communication, acted extreme, kept himself constantly in the news, and after the fact, stated that he did a great job. Few people go look at actual data. People who wanted these things had the vague idea that he’d dramatically changed things. What he’d done was to provide them with political theater aimed at giving voters the impression that he’d done that.
He spent a lot of time in the news talking about The Wall, fighting for The Wall, talking about how he wouldn’t spend anything on it, how Mexico would pay for it. The typical voter got the impression that he was going to build a wall along the whole border. He broke social norms for communication, used aggressive phrasing, acted extreme, kept himself constantly in the news. The federal government paid to build a little wall, principally maintenance of existing fencing. Mexico did not pay for it. What he’d done was to provide voters with political theater aimed at giving them the impression that he’d done what he’d promised.
He spent a lot of time in the news talking about illegal immigrants, fighting them, staying in the news on it. He broke social norms for communication, used aggressive phrasing, acted extreme, kept himself constantly in the news. And what actually happened:
In 2020, the removal of illegal immigrants from the interior of the United States was the lowest as an absolute number and as a share of the illegal immigration population since ICE was created in 2003
What he’d done was to provide voters with political theater aimed at giving them the impression that that’s what he’d done.
Right now, Trump is constantly in the news over eliminating waste – breaking communication norms, using aggressive phrasing, keeping himself in the news. I think that Musk did say that he’d eliminated some dollar amount of waste, albeit without specifics as to that waste. Ditto for illegal immigrant deportations – lots of photographs and strong phrasing, not a lot of data. Probably other stuff – I didn’t pay attention to his campaigning for a second term, so I don’t know what supporters were promised. Oh, right, foreign aid.
https://www.brookings.edu/articles/what-every-american-should-know-about-u-s-foreign-aid/
Opinion polls consistently report that Americans believe foreign aid is in the range of 25 percent of the federal budget. When asked how much it should be, they say about 10 percent. In fact, at $39.2 billion for fiscal year 2019, foreign assistance is less than 1 percent of the federal budget.
This means that a lot of Americans feel that they’re being taken advantage of and are upset about that imagined 25% of the federal budget.
Trump “destroyed” USAID, was constantly in the news for doing so, said a lot of extreme things. Musk said something about “feeding it into the wood chipper”. In practice, he was in the news for suspending operations and a small number of layoffs, and IIRC there was a reorganization that placed it under State, so for some definitions, it ceased to exist. I have not seen anything about long-run actual dollar value change to foreign aid, but if I had to guess based on his past MO, it won’t change a lot. But I bet that at the end of his term, a lot of voters will have the impression that it has.
NFS doesn’t do snapshotting, which is what I assumed that you meant and I’d guess ShortN0te also assumed.
If you’re talking about qcow2 snapshots, that happens at the qcow2 level. NFS doesn’t have any idea that qemu is doing a snapshot operation.
On a related note: if you are invoking a VM using a filesystem image stored on an NFS mount, I would be careful, unless you are absolutely certain that this is safe for the version of NFS and the specific caching options for both NFS and qemu that you are using.
I’ve tried to take a quick look. There’s a large stack involved, and I’m only looking at it quickly.
To avoid data loss via power loss, filesystems – and thus the filesystem images backing VMs using filesystems – require write ordering to be maintained. That is, they need to have the ability to do a write and have it go to actual, nonvolatile storage prior to any subsequent writes.
At a hard disk protocol level, like for SCSI, there are BARRIER operations. These don’t force something to disk immediately, but they do guarantee that all writes prior to the BARRIER are on nonvolatile storage prior to writes subsequent to it.
I don’t believe that Linux has any userspace way for a process to request a write barrier. There is no fwritebarrier() call. This means that the only way to impose write ordering is to call fsync()/sync() or use similar such operations. These force data to nonvolatile storage, and do not return until it is there. The downside is that this is slow. Programs that frequently do such synchronizations cannot issue writes very quickly, and are very sensitive to the latency of their nonvolatile storage.
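To make that concrete, this is roughly the only portable shape that “make sure write A is on nonvolatile storage before write B” can take (a sketch; the file name and offsets are made up):

```c
/* Sketch: forcing write A to nonvolatile storage before write B.
 * Without the fsync(), the kernel and device are free to reorder them. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("journal.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char a[] = "write A: journal entry";
    const char b[] = "write B: commit record";

    pwrite(fd, a, sizeof a, 0);
    if (fsync(fd) != 0) {           /* returns only once A is on stable storage */
        perror("fsync");
        return 1;
    }
    pwrite(fd, b, sizeof b, 4096);  /* only now is B allowed to land */

    close(fd);
    return 0;
}
```

And that fsync() is exactly the slow part: every ordering point costs a full round trip to the device.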
From the qemu(1) man page:
By default, the cache.writeback=on mode is used. It will report data writes as completed as soon as the data is present in the host page cache. This is safe as long as your guest OS makes sure to correctly flush disk caches where needed. If your guest OS does not handle volatile disk write caches correctly and your host crashes or loses power, then the guest may experience data corruption. For such guests, you should consider using cache.writeback=off. This means that the host page cache will be used to read and write data, but write notification will be sent to the guest only after QEMU has made sure to flush each write to the disk. Be aware that this has a major impact on performance.
I’m fairly sure that this is a rather larger red flag than it might appear, if one simply assumes that Linux must be doing things “correctly”.
Linux doesn’t guarantee that a write to position A goes to disk prior to a write to position B. That means that if your machine crashes or loses power with the default settings, you can potentially corrupt a filesystem image, even for drive images stored on a filesystem on the local host.
https://docs.kernel.org/block/blk-mq.html
Note
Neither the block layer nor the device protocols guarantee the order of completion of requests. This must be handled by higher layers, like the filesystem.
POSIX does not guarantee that write() operations to different locations in a file are ordered.
https://stackoverflow.com/questions/7463925/guarantees-of-order-of-the-operations-on-file
So by default – which is what you might be doing, wittingly or unwittingly – if you’re using a disk image on a filesystem, qemu simply doesn’t care about write ordering to nonvolatile storage. It does writes. It does not care about the order in which they hit the disk. It is not calling fsync() or using analogous functionality (like O_DIRECT).
NFS entering the picture complicates this further.
https://www.man7.org/linux/man-pages/man5/nfs.5.html
The sync mount option

The NFS client treats the sync mount option differently than some other file systems (refer to mount(8) for a description of the generic sync and async mount options). If neither sync nor async is specified (or if the async option is specified), the NFS client delays sending application writes to the server until any of these events occur:

- Memory pressure forces reclamation of system memory resources.
- An application flushes file data explicitly with sync(2), msync(2), or fsync(3).
- An application closes a file with close(2).
- The file is locked/unlocked via fcntl(2).

In other words, under normal circumstances, data written by an application may not immediately appear on the server that hosts the file. If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to user space. This provides greater data cache coherence among clients, but at a significant performance cost. Applications can use the O_SYNC open flag to force application writes to individual files to go to the server immediately without the use of the sync mount option.
So, strictly speaking, this doesn’t make any guarantees about what NFS does. It says that it’s fine for the NFS client to send nothing to the server at all on write(). With the default NFS mount options, the events listed above are the only times a write() to a file is guaranteed to make it to the server. And if it’s not going to the server, it definitely cannot be flushed to nonvolatile storage.
Now, I don’t know this for a fact – I’d have to go digging around in the NFS client you’re using. But it would be compatible with the guarantees listed, and I’d guess that the NFS client probably isn’t keeping a log of all the write()s and then replaying them in order. Even if it did, for that to meaningfully affect what’s on nonvolatile storage, the NFS server would have to flush each replayed write to nonvolatile storage in turn. Instead, the client is probably just keeping a list of dirty data in the file, and then flushing it to the NFS server at close().
That is, say you have a program that opens a file filled with all ‘0’ characters, and does:
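```c
/* in C terms; the exact calls don't matter */
pwrite(fd, "1", 1, 1);     /* write "1" to position 1    */
pwrite(fd, "1", 1, 5000);  /* write "1" to position 5000 */
pwrite(fd, "2", 1, 1);     /* write "2" to position 1    */
pwrite(fd, "2", 1, 5000);  /* write "2" to position 5000 */
close(fd);
```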
At close() time, the NFS client probably doesn’t flush “1” to position 1, then “1” to position 5000, then “2” to position 1, then “2” to position 5000. It’s probably just flushing “2” to position 1, and then “2” to position 5000, because when you close the file, that’s what’s in the list of dirty data in the file.
The thing is that unless the NFS client retains a log of all those write operations, there’s no way to send the writes to the server in a way that avoids putting the file into a corrupt state if power is lost. It doesn’t matter whether it writes the “2” at position 1 or the “2” at position 5000 first. In either case, it’s creating a situation where, for a moment, one of those two positions has a “0” and the other has a “2”. If there’s a failure at that point – the server loses power, the network connection is severed – that’s the state in which the file winds up. That’s a state that is inconsistent and should never have arisen. And if the file is a filesystem image, then the filesystem might be corrupt.
So I’d guess that both of those points in the stack – the NFS client writing data to the server, and the server’s block device scheduler – permit inconsistent state if there’s no fsync()/sync()/etc being issued, which appears to be the default behavior for qemu. And running on NFS probably creates a larger window for a failure to induce corruption.
It’s possible that using qemu’s iSCSI backend avoids this issue, assuming that the iSCSI target avoids reordering. That’d avoid qemu going through the NFS layer.
I’m not going to dig further into this at the moment. I might be incorrect. But I felt that I should at least mention it, since filesystem images on NFS sounded a bit worrying.
This shift follows China’s diplomatic push targeting the Global South
Trump’s return signals a firmer stance
Trump cutting USAID probably isn’t helping if the metric here is “who are countries in the Global South more likely to pay attention to”.
US President Donald Trump said he is committed to “buying and owning” Gaza and that other countries in the Middle East could help to rebuild it.
Last term, he spent a large portion of his time vocally committing to building a wall and having Mexico pay for it, so I guess that’s about par for the course.
“Tech workers” is pretty broad.
Tech Support
There are support chatbots that exist today that act as a support feature for people who want to ask English-language questions rather than search for answers. Those were around even before LLMs and could work on even simpler principles. Having tier-1 support workers work off a flowchart is a thing, and you can definitely make a computer do that even without any learning capability at all. So they definitely can fill some amount of role. I don’t know how far that will go, though. I think that there are probably going to be fundamental problems with novel or customer-specific issues, because a model just won’t have been trained on them. I think that it’s going to have a hard time synthesizing an answer from answers to multiple unrelated problems that it might have in its training corpus. So I’d say, yeah, to some degree, and we’ve successfully used expert systems and other forms of machine learning in the past to automate some basic stuff here. I don’t think that this is going to be able to take over the field as a whole.
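To illustrate how little machinery the flowchart version needs, here’s a toy sketch (the questions are made up, and there’s no learning anywhere in it):

```c
/* A tier-1 support "flowchart" as a plain binary decision tree,
 * walked by yes/no answers. */
#include <stdio.h>

struct node {
    const char *text;             /* a question, or at a leaf, the advice */
    const struct node *yes, *no;  /* both NULL at a leaf */
};

static const struct node escalate = { "Escalating to tier 2.", NULL, NULL };
static const struct node plug_in  = { "Plug it in and try again.", NULL, NULL };
static const struct node reboot   = { "Reboot it and try again.", NULL, NULL };
static const struct node plugged  = { "Is it plugged in?", &escalate, &plug_in };
static const struct node rebooted = { "Have you already tried rebooting?", &escalate, &reboot };
static const struct node root     = { "Does the machine power on?", &rebooted, &plugged };

int main(void)
{
    const struct node *n = &root;
    char buf[8];
    while (n->yes) {  /* internal node: ask, then branch on y/n */
        printf("%s (y/n) ", n->text);
        if (!fgets(buf, sizeof buf, stdin)) return 0;
        n = (buf[0] == 'y') ? n->yes : n->no;
    }
    printf("%s\n", n->text);
    return 0;
}
```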
Writing software
Can existing LLM systems write software? No. I don’t think that they are an effective tool to pump out code. I also don’t think that the current, “shallow” understanding that they have is amenable to doing so.
I think that the thing that LLMs work well at is producing stuff that is different, but appears to a human to be similar to other content. There are a variety of uses where that works, to varying degrees, for content consumed by humans.
But humans deal well with errors in what we see. The kinds of errors in AI-generated images aren’t a big issue for us – they just need to cue up our memories of things in our head. Programming languages are not very amenable to that. And I don’t think that there’s a very effective way to lower that rate.
I think that it might be possible to make use of an LLM-driven “warning” system when writing software; I’m not sure if someone has done something like that. Think of something that works the way a grammar checker does for natural language. Having a higher error rate is acceptable there. That might reduce the amount of labor required to write code, though I don’t think that it’ll replace it.
Maybe it’s possible to look for common security errors to flag for a human by training a model to recognize those.
I also think that software development is probably one of the more-heavily-automated fields out there because, well, people who write software make systems to do things over and over. High-level programming languages rather than writing assembly, software libraries, revision control…all that was written to automate away parts of tasks. I think that in general, a lot of the low-hanging fruit has been taken.
Does that mean that I think that software cannot be written by AI? No. I am sure that AI can write software. But I don’t think that the AI systems that we have today, or systems that are slightly tweaked, or systems that just have a larger model, or something along those lines, are going to be what takes over software development. I also think that the kind of hurdles that we’d need to clear to really fully write software from an AI require us to really get near an AI that can do anything that a human can do. I think that we will eventually get there, and when we get there, we’ll see human labor in general be automated. But I don’t think that OpenAI or Microsoft are a year away from that.
System and network administration
Again, I’m skeptical that interacting with computers is where LLMs are going to be the most-effective. Computers just aren’t that tolerant of errors. Most of the things that I can think of that you could use an AI to do, like automated configuration management or something, already have some form of automated tools in that role.
Also, I think that obtaining training data for this corpus is going to be a pain. That is, I don’t think that sysadmins are going to generally be okay with you logging what they’re doing to try to build a training corpus, because in many cases, there’s potential for leaks of sensitive information.
And a lot of data in that training corpus is not going to be very timeless. Like, watching someone troubleshoot a problem with a particular network card…I’m not sure how relevant that’s going to be for later hardware.
Quality Assurance
This involves too many different things for me to make a guess. I think that there are maybe some tasks that some QA people do today that an LLM could do. Instead of using a fuzzer to throw input in for testing, maybe have an AI predict what a human would do.
Maybe it’s possible to build some kind of model mapping instructions to operations with a mouse pointer on a screen and then do something that could take English-language instructions to try to generate actions on that screen.
But I’ve also had QA people do one-off checks, or things that aren’t done at mass scale, and those probably just aren’t all that sensible to automate, AI or no. I’ve had them do tasks in the real world (“can you go open up the machine that’s seeing failures and check what the label on that chip reads, because it’s reporting the same part number in software”). I’ve written test plans for QA to run on things I’ve built, and had them say “this is ambiguous”. My suspicion is that an LLM trained on what information is out there is going to have a hard time, without a deep understanding of a system, saying “this is ambiguous”.
Overall
There are other areas. But I think that any answer is probably “to some degree, depending upon what area of tech work, but mostly not, not with the kind of AI systems that exist today or with minor changes to existing systems”.
I think that a better question than “can this be done with AI” is “how difficult is this job to do with AI”. I mean, I think that eventually, pretty much any job could probably be done by an AI. But I think that some are a lot harder than others. In general, the ones that are more-amenable are, I think, those where one can get a good training corpus – a lot of recorded data showing how to do the task correctly and incorrectly. I think that, at least using current approaches, tasks that are somewhat-tolerant of errors are better. For any form of automation, AI or no, tasks that need to be done repeatedly many times over are more-amenable to automation. Using current approaches, problems that can be solved by combining multiple things from a training corpus in simple ways, without a deep understanding, not needing context about the surrounding world or such, are more amenable to being done by AI.
Too bad there aren’t any non-profit search engines you could promote instead of the one that charges people money in order to make a profit.
If I remember from prior discussions, you prefer Duck Duck Go. If you want to mention that you use Duck Duck Go, I have no problem with that. I think that that’s great.
giving a corporation that would happily fuck you over in a second like every other corporation
I think that Kagi has considerably less incentive to do so than Duck Duck Go does, because they have a viable revenue model that doesn’t involve datamining me the way Google does or showing ads to me the way Duck Duck Go does. Yes, you can use an ad-blocker on Duck Duck Go, but then you’re offloading the costs onto other users who don’t do that, and in the long run, Duck Duck Go has an incentive to block users using ad blockers.
You may disagree with my assessment. But I’ve made that decision, I’m happy with it, I like the fact that Kagi added a Threadiverse search feature, and I am not going to change search engines to your favorite search engine, nor do I intend to stop telling people that I use Kagi.
You’re free to comment every time if you want. I have no intention to change what I am doing, because I happen to like them, and my use of the term predates my use of that engine – I wrote googles prior to this.
If you want to ban me because you cannot tolerate my writing style, do so, and I’ll go use communities other than those that you moderate. Trying to harass me into changing what I write is not going to have an effect.
Honestly, I’m a little surprised that a smartphone user wouldn’t have familiarity with the concept of files, setting aside the whole familiarity-with-a-PC thing. Like, I’ve always had a file manager on my Android smartphone. I mean, ok…most software packages don’t require having one browse the file structure on the thing. And many are isolated, don’t have permission to touch shared files. Probably a good thing to sandbox apps, helps reduce the impact of malware.
But…I mean, even sandboxed apps can provide file access to the application-private directory on Android. I guess they just mostly don’t, if the idea is that they should only be looking at files in application-private storage on-device, or if they’re just the front end to a cloud service.
Hmm. I mean, I have GNU/Linux software running in Termux, do stuff like scp from there. A file manager. Open local video files in mpv or in PDF viewers and such. I’ve a Markdown editor that permits browsing the filesystem. Ditto for an org-mode editor. I’ve a music player that can browse the filesystem. I’ve got a directory hierarchy that I’ve created, though simpler and I don’t touch it as much as on the PC.
But, I suppose that maybe most apps just don’t expose it in their UI. I could see a typical Android user just never using any of the above software. Not having a local PDF viewer or video player seems odd, but I guess someone could just rely wholly on streaming services for video and always open PDFs off the network. I’m not sure that the official YouTube app lets one actually save video files for offline viewing, come to think of it.
I remember being absolutely shocked when trying to view a locally-stored HTML file once that Android-based web browsers apparently didn’t permit opening local HTML files, that one had to set up a local webserver (though that may have something to do with the fact that I believe that by default, with Web browser security models, a webpage loaded via the file:// URI scheme has general access to your local filesystem but one talking to a webserver on localhost does not…maybe that was the rationale).
I’m fairly sure that I’ve read some articles by this guy on Forbes before, because I remember that he had some article on something, many years back, that I really liked and I distinctly remember thinking that his thumbnail looked kind of frumpy. I believe that he’s British. Looks like he hasn’t been at Forbes in almost a decade, though.
kagis
Yeah, was apparently born in Torquay, England.
https://www.timworstall.com/2008/07/about-tim-worstall/
I was born in Torquay in 1963, grew up mostly in Bath (with a couple of years in Naples, Italy as a result of my father’s Naval career) and was educated at Downside Abbey.
EDIT: Also, a “ton” and a “tonne” aren’t the same thing – that’s not just dialect. A “ton” here in the US means a short ton, 2000 US lbs (about 907 kg). A “tonne” is a metric ton, 1000 kg. I don’t know what Brits normally mean if they write “ton”, whether it’s a short ton or a long ton (2240 lbs, about 1016 kg) or a metric ton. In the US, we’d normally write “metric ton” instead of “tonne”.
If we want to have artificially-generated demand for zinc – if we really need to ensure domestic production capacity – there’s no requirement for it to be the penny. I’m sure that we can find something else to make out of zinc.
The penny itself wasn’t always zinc. I don’t remember the changeover year.
checks WP
https://en.wikipedia.org/wiki/Penny_(United_States_coin)
The current copper-plated zinc cent issued since 1982 weighs 2.5 grams, while the previous 95% copper cent still found in circulation weighed 3.11 g (see further below).
https://en.wikipedia.org/wiki/Zinc
Zinc is most commonly used as an anti-corrosion agent,[123] and galvanization (coating of iron or steel) is the most familiar form. In 2009 in the United States, 55% or 893,000 tons of the zinc metal was used for galvanization.[122]
Zinc is more reactive than iron or steel and thus will attract almost all local oxidation until it completely corrodes away.
We can just subsidize zinc production, or purchase something that requires those anti-corrosion properties.
I also am not at all sure that this was in fact the rationale. I can’t find a reference online to this being the rationale. I do see reference to zinc being useful because it’s particularly inexpensive. And the numbers given on this article seem to support the idea that pennies don’t really work out to generating a very substantial demand for zinc.
It’s Not Big Zinc Behind The Campaign To Keep The Penny
To run through the numbers, a penny coin weighs 2.5 grammes. Let’s call that all zinc (it’s not, but close enough). There’s 5 billion made a year, meaning that we’ve got 12,500,000,000 grammes, or divide by a million to get 12,500 tonnes. Now, if that were 12,500 tonnes of gold being made into coins every year, with global virgin production being around 3,000 tonnes, then sure, that would be a contract worth, umm, influencing the political process, to secure and keep running. The same would be true of many metals in fact. But it’s just not true of the zinc industry. Using the USGS, the correct source for these sorts of numbers, we find that US production of zinc is around 250,000 tonnes a year, global production 13.5 million. Even if we assume (as we might, sounds like the sort of thing that might be true) that US coins must be made of US produced metal this is still a very marginal part of the total market.
Further, zinc runs about $2,200 a tonne at present, meaning that we’re talking about maybe $25 million a year as the zinc cost of our pennies. And we’re told who and how much is paid to keep lobbying for the penny:
But his written statement did not mention that Weller is actually a lobbyist and head of strategic communications for Dentons, a law firm representing the interests of zinc producer Jarden Zinc Products, a major provider of coin blanks that are made into currency.
…
Jarden Zinc Products spent $1.5 million from 2006 through the first quarter of 2014 lobbying on such things as “issues related to the one-cent coin” and represented by Weller when he worked at B&D Consulting and, more recently, Dentons.
No, the important point here is not the zinc industry, nor “Big Zinc”. The important part is this “a major provider of coin blanks”. If your business is making coin blanks then obviously you’re very interested in the continued existence of coin denominations that are made from coin blanks. That they’re made from zinc is an irrelevance compared to that.
Believe me, you don’t spend the best part of $200,000 a year in lobbying expenses in order to sell $25 million’s worth of zinc. This metal is a commodity, you can sell that amount in one ten minute phone call to any London Metals Exchange ring member. Heck, give me a couple of days to get organised and I could sell it for you at the market price. I’d also charge rather less than $200,000 to do it.
EDIT: WP seems to also support the author’s argument.
https://en.wikipedia.org/wiki/Jarden_Zinc_Products
The company has resisted past efforts to eliminate the penny in the United States[1] through an astroturf lobby organization called Americans for Common Cents.
The company’s largest source of revenue comes from the production of coin blanks, having produced over 300 billion blanks at their Tennessee facility.
The penny, one of the first coins made by the U.S. Mint after its establishment in 1792, now costs more than two cents to produce, Trump said in a post on his Truth Social site shortly after departing the Super Bowl game in New Orleans.
“For far too long the United States has minted pennies which literally cost us more than 2 cents. This is so wasteful!” Trump wrote. “I have instructed my Secretary of the US Treasury to stop producing new pennies.”
While I think that it’s probably something that we should have done a long time ago, I don’t think that the major cost is actually the production cost being higher than the coin’s face value, but rather the cost of needing to handle and process pennies.
ProjectM is literally the modern incarnation of Milkdrop, and it’s packaged in Debian, so I assume all child distros.
EDIT: If you want something more expensive and elaborate than a picture on a computer display, look into DMX512. That’s the electrical standard for connecting the kind of controlled lighting systems that are used in clubs and such. You get a DMX512 transceiver, plug it into a computer, hook up software, and attach DMX512 hardware. Strobes, computer-directed spotlights doing patterns, color-changing lights, etc. You can find YouTube videos of people using systems like that to drive lighting displays on houses synchronized with audio, stuff like this:
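And the wire protocol itself is simple: a DMX512 frame is a serial break, then a start code and up to 512 channel bytes at 250 kbaud, 8N2. A rough Linux sketch – the device path is made up, error handling is mostly omitted, and in practice you’d more likely drive this through something like OLA (Open Lighting Architecture):

```c
/* Push one DMX512 frame out a USB-RS485 adapter on Linux.  Real DMX
 * expects the frame to be refreshed continuously, so you'd loop this. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/termbits.h>   /* struct termios2, BOTHER */

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_WRONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    /* DMX512: 250 kbaud, 8 data bits, no parity, 2 stop bits. */
    struct termios2 tio;
    ioctl(fd, TCGETS2, &tio);
    tio.c_cflag &= ~(CBAUD | PARENB | CSIZE);
    tio.c_cflag |= BOTHER | CS8 | CSTOPB | CLOCAL;
    tio.c_ispeed = tio.c_ospeed = 250000;
    ioctl(fd, TCSETS2, &tio);

    unsigned char frame[513];
    memset(frame, 0, sizeof frame);  /* frame[0] is the start code, 0x00 */
    frame[1] = 255;                  /* channel 1 full on - say, a dimmer */

    ioctl(fd, TIOCSBRK);  /* the break that begins each DMX frame */
    usleep(1000);         /* spec minimum is 88 us; long breaks are tolerated */
    ioctl(fd, TIOCCBRK);  /* mark-after-break */
    usleep(100);
    write(fd, frame, sizeof frame);

    close(fd);
    return 0;
}
```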
https://en.wikipedia.org/wiki/Jerry_Lawson_(engineer)