Dealing with this right now. Dog is super cute. It is still a terrible decision for my family, and that’s not the dog’s fault.
I think this supports his argument. Having to research desktop environments to decide which is optimized for the potential problems a new user may face, then finding a distro that packages that DE, is quite frankly too much for the average user.
I’d argue between 3% and 5% of PC users are willing to research and experiment to find the flavor of Linux that truly works for them.
Linux has come a long way; I still remember using Gentoo as a daily driver and seeing Linux cross 1% of desktop share. But the average desktop user doesn’t know the difference between a kernel and a colonel, and they don’t want to.
Fuck HP. My wife has an HP printer at work that she can’t print to without an app.
The app drains her iPad battery in 4 hours, so she had to remove it but kept it on her phone.
She can’t print to our Brother at home because the app intercepts the share/print capability.
Such a piece of shit company.
If LLMs were accurate, I could support this. But at this point there’s too much overtly incorrect information coming from LLMs.
“Letting AI scrape your website is the best way to amplify your personal brand, and you should avoid robots.txt or use agent filtering to effectively market yourself. -ExtremeDullard”
isn’t what you said, but is what an LLM will say you said.
Below the elite level, relative skill differences can be large enough that a skilled cis woman can outcompete a less skilled cis man. And that’s where 99% of sports are played, so these rules/laws just serve to make cis men not feel threatened by potentially losing in a softball game to a woman.
At the more elite levels, though, the skill gaps are much smaller, and being faster or stronger is the difference. Most WNBA players can’t dunk; most NBA players can. Elite men run the 100m a full second faster than elite women. At those levels, men have a distinct physical advantage.
There have been some studies indicating trans women still have higher lung capacity than cis women, more strength, etc., but there’s still some uncertainty because the number of studies is limited, and there’s even one study that indicated cis women may have an advantage over trans women.
But the laws currently being passed aren’t targeting elite athletes; they’re targeting kids, and not out of the spirit of competition, but out of hate.
Just piling on at this point, but we made 2 changes last spring that made summer so much more tolerable in our house.
We haven’t found ourselves needing one, but a mini split has popped up a lot here already and is a great idea.
I used to be in credit risk for a very large stock market company.
Calling the bottom of the market is the same as betting big and getting 21 in blackjack.
Super cool when it happens, but not skill. The number of grown men I had to hear crying because they were dollar-cost averaging down to the bottom until they went broke still disturbs me.
I’m happy this worked for you, but it was not skill.
Just looking at employers in my professional career: two. One for 15 years, then the current one for 3.
Looking at my direct and diagonal leaders, they seem to average 3-5 years a role, and I consider staying with my prior employer for so long a mistake. I made career progression and promotions there, but it still slowed me down vs changing employers.
Sure, self-hosting is a great option for very large projects, but a random Python library to help with an analytics workflow isn’t going to self-host. Those projects, along with 27,999,990 others, have chosen GitHub, often explicitly to reduce the barrier to contribution.
Also, all of those examples are built on thousands of other FOSS projects, 99% of which aren’t self-hosting. This is the same as arguing Amazon is the only bookseller while ignoring the thousands of independent book publishers creating the books Amazon sells.
GitHub has 28 million public repos. GitLab has more than an order of magnitude fewer: under a million in 2020, and nearly 80% of those without a FOSS license.
Is it everyone’s favorite, or the best, or the most feature-rich? Nah. Is it where the FOSS projects are? Yes.
This is what Republicans have always done well. They organize locally, take over school boards and city councils, drive the change they want to see in local communities, and use that local support to drive voter turnout nationally.
We don’t see Democrats crashing school board and city council meetings or participating in local politics en masse with anywhere near the same effectiveness as Republicans, and it leads to underwhelming participation in national elections, as the left sits around wondering “what has the DNC done for me.”
Lots of boring applications that are beneficial in focused use cases.
Computer vision is great for optical character recognition: think scanning documents to digitize them, depositing checks from your phone, etc. There are also some good computer vision use cases for scanning plants to identify them, facial recognition for labeling the photos on your phone, etc.
Also some decent opportunities in medical research: protein analysis for drug development, and (again) computer vision to detect cancerous cells and read X-rays and MRIs.
Today all the hype is about generative AI and content creation, which is enabled by transformer technology, but that’s basically just version 2 (or maybe more) of recurrent neural networks, or RNNs. Back in 2015 I remember the essay “The Unreasonable Effectiveness of RNNs” being just as novel and exciting as ChatGPT.
We’re still burdened with this comment from the first paragraph, though.
“Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense.”
This will likely be a very difficult chasm to cross, because there is a lot more to human knowledge than thinking of the next letter in a word or the next word in a sentence. We have knowledge domains where, as individuals, we may be brilliant, and others where we may be ignorant. Generative AI is trying to become a genius in all areas at once, and finds itself borrowing “knowledge” from Shakespearean literature to answer questions about modern philosophy because the order of the words in the sentences is roughly similar given a noun it used 200 words ago.
Enter Tiny Language Models. Using the technology from large language models, but hyper-focused on writing children’s stories, they appear to show real progress through specialization, and could allow generative AI to stay focused and stop sounding incoherent when the details matter.
This is relatively full circle, in my opinion: RNNs were designed to solve one problem well, then they unexpectedly generalized well, and the hunt was on for the premier generalized model. That hunt advanced the technology by enormous amounts, and now that technology is being used in Tiny Models, which are again looking to solve specific use cases extraordinarily well.
Still very much TBD what use cases can be identified that add value, but recent advancements seem ripe to transition gen AI from a novelty to something truly game-changing.
Yep, we’re looking at that exact option right now. Six free sessions to see if it’s going to work, then it’s time to max that deductible!
Been looking at therapists for my teenage daughter; she’s been debating therapy for a couple of years and has recently fully committed.
We have good insurance and are financially secure, and holy shit, it’s still going to cost an extraordinary amount. I don’t understand how anyone struggling with financial insecurity could even consider therapy an option.
What a fundamentally broken system; there isn’t a single type of care that’s accessible to the people who need it.
The example that comes to mind is the Birthday Problem.
If you are in a room with 22 other people, there is roughly a 22 in 365 chance (about 6%) that one of them shares your birthday. Relatively unlikely. But there is a 50% chance that some pair of people in the room shares a birthday. Much more likely.
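The arithmetic is quick to sanity-check; here’s a minimal Python sketch (for you plus 22 others, n = 23):

```python
from math import prod

n = 23  # you plus 22 other people

# Chance someone shares *your* birthday: 1 - (364/365)^22
p_mine = 1 - (364 / 365) ** (n - 1)

# Chance *any* two people share a birthday:
# 1 - (365/365) * (364/365) * ... * (343/365)
p_any = 1 - prod((365 - i) / 365 for i in range(n))

print(f"someone shares yours: {p_mine:.1%}")  # ~5.9%
print(f"any two share:        {p_any:.1%}")   # ~50.7%
```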
That jay will double-cross the owl; they are not to be trusted.
As I’ve heard this explained, enterprise admins have scripts, and to a lesser extent muscle memory, tied to Control Panel layouts and command lines, and that’s not a group you want to irritate.
Yeah, model training is hard. Like capital-H HARD. You need a bunch of data, and it needs to be high quality.
New York is the financial center of the USA, so separating finance jobs from job postings written by someone using New England vernacular is a step you need to go through to make sure your data is high enough quality.
So if you are just starting, use the 20 Newsgroups dataset in those links; it’s pretty good data with a ton of resources written about it. It’s not fun data, but it isn’t as likely to fall victim to biases you aren’t expecting.
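Loading it is a one-liner with scikit-learn (a minimal sketch; stripping the metadata keeps a model from cheating off the headers):

```python
from sklearn.datasets import fetch_20newsgroups

# Download (and cache) the training split; removing headers/footers/quotes
# strips the metadata that models love to overfit on.
news = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))

print(len(news.data))         # ~11,000 documents
print(news.target_names[:5])  # first few of the 20 category labels
```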
Couple of options to start out with: Topic Labeling and Topic Extraction.
Topic Labeling is a classic example of supervised learning, or using ML with training data to classify new observations based on patterns found in training data.
Topic Extraction is a classic example of unsupervised learning, or attempting to identify patterns without training data.
I’m going to start with labeling, or classification, here. There are plenty of tools to train a model to classify text into categories; I’d recommend starting with this scikit-learn tutorial to see what’s involved before you start.
With any classification problem, you need good training data. You mentioned you’ve scraped 400 job postings, and I’m assuming you would want to use the job description to predict the job title. Some quick math: you’ll want to withhold 30% of your data to test your model, so that leaves 280 postings to train on. I would recommend at least 100 descriptions per job title, so if you have 2-3 job titles, perfect: you’re ready to follow that tutorial with your own data! A minimal sketch of that flow is below.
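To give a feel for the shape of it, here’s a minimal sketch of that pipeline. The toy postings are obviously stand-ins for your scraped data, and TF-IDF plus logistic regression is just one reasonable starting combo, not the only option:

```python
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Toy (description, title) pairs standing in for your 400 scraped postings.
postings = [
    ("analyze credit portfolios and build risk models", "risk analyst"),
    ("monitor loan exposure and report on default rates", "risk analyst"),
    ("build dashboards and write sql queries for reporting", "data analyst"),
    ("clean datasets and produce weekly analytics reports", "data analyst"),
] * 25  # pretend we hit ~100 examples per title, per the rule of thumb above

descriptions, titles = map(list, zip(*postings))

# Withhold 30% for testing, matching the quick math above.
X_train, X_test, y_train, y_test = train_test_split(
    descriptions, titles, test_size=0.3, stratify=titles, random_state=0
)

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),  # bag-of-words TF-IDF features
    LogisticRegression(max_iter=1000),      # simple linear classifier
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```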
If you have more job titles than that, you probably won’t be able to do labeling/classification here, and will instead want to do topic extraction, where you throw your walls of text at the machine and let it tell you the patterns it finds.
Topic modeling with spaCy and scikit-learn is a great overview of this process, and plugging your own data in is pretty straightforward.
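If you’d rather see the shape of the extraction side first, here’s a minimal scikit-learn-only sketch using LDA (that tutorial layers spaCy preprocessing on top of the same idea); the toy docs are placeholders for your job descriptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for your job descriptions.
docs = [
    "manage credit risk models for the loan portfolio",
    "analyze default rates and report on risk exposure",
    "write python pipelines for analytics dashboards",
    "build sql reports and data visualizations",
]

# LDA works on raw term counts, not TF-IDF.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)

# Ask the model to surface 2 latent topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words in each discovered topic.
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```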
Neither of these examples really scratches the surface of what’s possible with text-based ML these days, but both are perfectly viable tools to run quickly and on commodity hardware.
Elder millennial here. I had kids, my brother didn’t, and my kids, though young enough to change their minds, are adamant they won’t have kids.
I think the more interesting stat likely unfolding is the marked decrease in great-grandparents within a generation.
To be clear, this is not a “threat to society” or whatever; people can decide if they want kids or not. Just a shower thought.