Researchers used AI to design a new material that they used to build a working battery – it requires up to 70 percent less lithium than some competing designs.
Also, the AI would have just sped up an existing plan they already had to try new approaches, because AI doesn’t create new ideas or think things up out of nowhere.
If you tell AI to do things within a certain range and it gives you results, then AI ‘came up with’ a design about as much as Google came up with search results when you typed something into the search bar.
It can apply existing concepts in ways we haven’t thought of, like people do. AI has been used for exactly this in chemistry for years. When given constraints (less lithium) and parameters (this much capacity), it can try permutations of various designs that theoretically meet those conditions.
Yes, AI is overhyped, and yes, it’s often exaggerated by news sources, but that doesn’t mean AI is a non-invention or something. It’s a long way off from the lofty goals tech CEOs like to throw around, but that doesn’t mean it’s useless.
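To make the constraints-and-parameters point concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the substitute ions, the fields of a “design”, and the scoring function stand in for real chemistry and real simulation; none of it is from the actual study): generate candidate compositions, throw away the ones that miss the capacity target, and keep the one that needs the least lithium.

```python
import random

# Toy "design" = how much lithium the cell keeps, plus a (hypothetical)
# substitute ion. Fields and numbers are placeholders, not real chemistry.
FILLERS = ["Na", "K", "Mg"]

def simulated_capacity(li_fraction, filler):
    # Stand-in for a real physics simulation: capacity drops as lithium
    # is removed, with a made-up penalty per substitute ion.
    penalty = {"Na": 0.10, "K": 0.25, "Mg": 0.15}[filler]
    return li_fraction * (1.0 - penalty) + 0.3

def search(n_candidates=100_000, max_li=0.30, min_capacity=0.45):
    best = None
    for _ in range(n_candidates):
        li = random.uniform(0.0, max_li)      # constraint: less lithium
        filler = random.choice(FILLERS)
        cap = simulated_capacity(li, filler)
        if cap < min_capacity:                # parameter: capacity floor
            continue
        if best is None or li < best[0]:      # prefer the least lithium
            best = (li, filler, cap)
    return best

print(search())
```

Real pipelines use far richer physics models and learned scoring, but the shape of the thing (a constrained search over a space someone defined) is the same.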
We have had weather models, astronomical models, and all other kinds of computer-based prediction methods that run through multiple permutations that theoretically meet conditions. AI is just another step forward: better pattern recognition and better at identifying relationships in data, depending on design choices. All of the chemistry findings came from the system being designed to try things people would not normally test for, because physical testing is expensive and AI can run simulated tests faster and cheaper.
My point is that saying ‘AI came up with’ is 100% inaccurate phrasing intended to trick people into thinking that AI is intelligent, instead of just being a very complex tool used to do things we already do, faster. It allows for trying more permutations and more pattern recognition, but it is just another approach alongside existing computer models that have also identified things we did not expect. Computer models have been used to identify stars with planets, but we don’t call those intelligent, because they aren’t being sold as something they’re not.
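The “simulated tests faster and cheaper” part is essentially surrogate modeling. Here is a hedged sketch of the general idea, not the pipeline from the article (the features and measurements below are random placeholders): fit a cheap statistical model on the experiments you already ran, then use it to rank a huge batch of untested candidates so the expensive lab work only happens on the most promising few.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical history: each row is a candidate's composition features,
# y is a property measured in past (expensive) experiments.
X_measured = rng.random((200, 4))
y_measured = X_measured @ np.array([0.5, -0.2, 0.1, 0.8]) + rng.normal(0, 0.05, 200)

# The "pattern recognition": a cheap statistical stand-in for lab testing.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_measured, y_measured)

# Score tens of thousands of untested candidates in seconds instead of
# months, then hand only the top few back to humans for real experiments.
X_untested = rng.random((50_000, 4))
predicted = surrogate.predict(X_untested)
top_ids = np.argsort(predicted)[-5:][::-1]
print("Most promising candidates to test in the lab:", top_ids)
```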
Ah, I see what you’re saying. Yes, the recognition for these advances should go to the human programmers and engineers who configure the software and build the models for testing. You’re right, I can definitely see why that distinction is important, and the media should make clear that the AI isn’t just switched on and magically works it all out on its own. It’s computational resources being directed at a task: the models it works within are set up by professionals, and the discoveries it finds are interpreted and made useful by those professionals.
The media is just parroting what the companies that want to sell AI are saying. They suck at reporting anything technical or scientific for sure, but they didn’t come up with this on their own.
Reading your first comment, my first thought was: how does this have any upvotes? That’s super wrong.
Top-notch comeback with this comment. I still can’t agree with the original wording, but I do recognize your point and agree with your sentiment. It’s a tool first and foremost.
That’s not true at all. AI can in fact generate novel techniques and solutions, and it has already done so in biotech and electrical engineering. I don’t think you understand how AI works or what it is.
I think maybe people are running into a misunderstanding between LLMs and neural nets, or machine learning in general? AI has become too big of an umbrella term. We’ve been using NNs for a while now to produce entirely new ways of going about things. They can find bugs in games that humans can’t, they’ve been used to design new wind turbine blades (including several asymmetrical ones, which humans just don’t really do), and they can plot out entirely new modes of locomotion when given physical bodies. Machine learning is fascinating and can produce very unique results, partly because it can be set up without the design biases humans carry.
And the nature of computers is that they are orders of magnitude better than humans at brute forcing. Machine learning can brute-force its way through (or, depending on the technique, do something smarter and more efficient than brute force) many, many more designs and techniques than we could ever test manually. Sure, it’ll fail many times, but it’s a numbers game, and it can pump those numbers. It’ll try a lot of weird and unique stuff we wouldn’t even think to try, with varying degrees of success.
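If it helps, here is roughly what that try-lots-of-weird-designs loop looks like as code: a bare-bones evolutionary search. The “blade” representation and the fitness function below are invented placeholders (a real setup would call an aerodynamics or physics simulator), but the mutate-and-select structure is the actual mechanism, and nothing in it nudges the result toward the symmetric shapes a human designer would default to.

```python
import random

# Toy evolutionary search: a "blade" is just a list of section angles.
def fitness(angles):
    # Made-up objective standing in for an expensive simulation.
    return sum(a * (i + 1) for i, a in enumerate(angles)) - max(angles) ** 2

def mutate(angles, scale=0.5):
    # Random variation: tweak every angle a little.
    return [a + random.gauss(0, scale) for a in angles]

population = [[random.uniform(-5, 5) for _ in range(6)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # variation

print("Best design found:", [round(a, 2) for a in max(population, key=fitness)])
```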
Name one that wasn’t just the system doing the thing it was told to do and the users being surprised. You know, the same way people are surprised when research using other approaches produces results they did not expect.
That’s a weird way of asking this. Of course it’s going to do what it’s told; the alternative is that it spits out a battery design out of the blue, for no reason. If it somehow found a way to make batteries with less lithium in a way that had never been done before, isn’t that an unexpected result from another approach?
This is not general artificial intelligence; everything we have is narrow AI, focused on solving one specific problem, from identifying birds to understanding interactions between drugs.
That’s the point: it takes all the factors we know about and speed-runs through all the possible ways they could work. Humans don’t have the time to look at every single possible way a battery could be constructed, but an ML model can just work its way through the problem faster and without human intervention.
Plus, just like with the new group of antibiotics we recently used AI to discover, it will give truly thinking humans something to expand upon.
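The speed argument is mostly about batch evaluation: once the scoring is just a function, you can vectorize or parallelize it and sweep millions of candidates unattended. A toy timing sketch with a made-up scoring rule (not anything from the battery work):

```python
import time
import numpy as np

# A big grid of hypothetical candidate designs, three parameters each.
candidates = np.random.default_rng(1).random((2_000_000, 3))

start = time.perf_counter()
# Made-up scoring rule applied to every candidate at once.
scores = candidates @ np.array([0.4, 0.3, 0.3]) - candidates.max(axis=1) ** 2
best = candidates[int(np.argmax(scores))]
elapsed = time.perf_counter() - start

print(f"Screened {len(candidates):,} candidates in {elapsed:.2f}s; best: {best}")
```

No human could eyeball two million permutations; a laptop gets through them in a fraction of a second.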
Really sick of this “oh, but you don’t realize AI doesn’t actually think! Therefore it’s all worthless!” smug bullshit, like you think you’re bringing anything of value to the conversation.
I didn’t say it was worthless. In fact, I said the exact same things you just said in another post but with the additional detail that the name actually does matter when it is clearly misleading people into thinking it is something that it is not.
Yeah, that would be coming up with a battery design.
What novel solutions has AI produced in electrical engineering?
Not even close to true
Do you think AI just does things unprompted?
No one said anything about unprompted
😏
😳
Only a small subset of AI uses prompts.
Think of prompts as input
Deep learning, arguably the most impressive type of AI, doesn’t take prompts as its inputs.
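For what it’s worth, the prompt-versus-input distinction is easier to see in code. In this tiny sketch (hypothetical weights and features, plain NumPy), the network’s input is just a vector of numbers; a text prompt is only what that vector happens to encode when the model is a language model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with random (hypothetical) weights.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def forward(x):
    hidden = np.maximum(0.0, W1 @ x + b1)    # ReLU layer
    return W2 @ hidden + b2                  # single numeric output

# The "input" is sensor readings, composition fractions, pixels... not text.
features = np.array([0.2, 0.7, 0.1, 0.9])
print(forward(features))
```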