Half a century ago, a philosopher imagined a world where we could fulfil our desires through an 'experience machine' like the Matrix. He argued we'd prefer reality, but was he right?
Finally, Nozick supposed that “plugging into an experience machine limits us to a man-made reality, to a world no deeper or more important than that which people can construct”.
I find myself agreeing with this, particularly after a lot of time spent in such man-made realities, whether in the form of books, movies, or games. At some point, I think some element of these will speak to people and inspire them to pursue something on their own, whether a character's hobby or the creation of their own media or sub-field. That is an action no experience machine or simulation could anticipate and generate in a way that adequately satisfies someone.
Their AI partner, they explain, “has been treating me like no other person has ever treated me”.
This is an aside, but it jumps out at me as an interesting tell. These AI partners aren't necessarily sophisticated enough to fully emulate people, and there's historical precedent for people seeing in them the traits they want to see. I wonder, then, whether this could be an angle for developing AI-intermediated therapy, whereby one may learn that it isn't the AI treating them well, but the patient themselves.
The AI may be serving as a method of directing their inner monologue into patterns of thought that are kinder and more uplifting than they might otherwise be.
“As we get more familiar with technology and especially virtual technology, we are going to care less and less that something is virtual rather than non-virtual,” Weijers notes.
Frankly, I think we've already been here for some time. The technological element undeniably alters matters, but society has arguably been in this situation for as long as people have been capable of abstract thought. People have always existed between knowledge and ignorance, amidst facts and fabrications, and have indulged as much, and often more, in fabrications as in facts.
What has consistently mattered more is how much people gain from that indulgence versus how much they lose, rather than its ontology.