Op-Ed: Robots, AI, and an undefined future – A frame of reference that can’t keep up with itself
EngineAI founder Evan Yao says the China-based maker of humanoid robots is working with US tech titans such as Amazon and Meta on giving them AI brains – Copyright AFP Patrick T. Fallon
The idea of functional robots dates back to ancient Greece. An automatic maid, the Automate Therapaenis, was built in the 3rd century BC, using hydraulics to mix drinks. Automata were also used as mechanical players in theatrical performances.
Then came a long historical gap, and since then things have become rather more complex. For most of that history, the idea of a robot was essentially humanoid, and it barely changed until very recently. Technology took a while to catch up, but it has: the humanoid robot is now just one species of robot.
If you define robots by function, you can see how far the technology has driven the thinking. Now the technology is adding wings, and a vast spectrum of new digressions, to that thinking. It isn’t helping the logic much.
Autonomous thinking robots as a working real-world proposition were arguably first mapped out in Asimov’s I, Robot. At the time it was a truly spectacular leap in logic, and it gave us the Three Laws of Robotics.
That was about as close to clear thinking as anyone got on the subject of living with robots for a long time. Now it’s becoming an almost imponderable real-world problem. Human relationships with robots have always been a cultural issue; they’re now becoming a social one. It’s almost as if total incomprehension were a problem.
Also built into the theory of robots was human conflict with robots.
This was a sort of quasi-Luddite perspective, not even theoretically practical in societies run by robots, like the world of the famous 1960s comic Magnus, Robot Fighter. There, the robots were the default bad guys, and Magnus, raised by robots, is the hero for fighting the bad ones. Robot characters, like B-9 in Lost in Space and Robby the Robot from Forbidden Planet, evolved slowly. The idea of robots with individual personalities took a long time to arrive.
Dependence on robots was also seen as a bad idea; the threat was that humans wouldn’t be able to function without them. Automation in general was, and still is, seen as a weakness in human survival capabilities.
The social frame of reference for robotics is as critical as it is inevitable. It’s the thinking that’s not working. The ideas may define the technology, but the technology is now forcing the ideas to evolve, rapidly. Autonomous robots and AI are raising many old and new doubts.
AI is integral to autonomous robots and automation in general. The dysfunctionality of AI is becoming a serious, expensive, and by definition, dangerous problem. Now, create millions of robots with those problems. Brilliant, aren’t you?
The conflict with automation is already real enough. It stems entirely and exclusively from human thinking. “AI slop” is human-prompted slop. Human thinking isn’t keeping up with the technology at all.
Nor is business, on any level. You can’t expect a collection of evolutionarily deficient corporate slobs and sycophants to grasp anything but money, and that’s generating (pun intended) disasters regularly. You’re not “saving wages”; productivity is about costs and whether you can control them. You’re investing in a class of tech that will be obsolete in five years, and paying for every second of it. Dumb is as dumb does.
Nor is this moronic, superficial thinking keeping up with basic critical technical standards. Businesses bleat about regulations that don’t yet exist while doing nothing about the obvious need for proper controls and safeguards.
What use is a tool that doesn’t work properly?
More to the point, what use are idiot users?
These aren’t even difficult issues. If you define AI and robots by function, you have an instant, ready-made definition of necessary quality controls and technical standards.
Like:
Don’t crash financial markets.
Don’t kill the patients.
Don’t crash the power supply.
Don’t crash the food supply.
Don’t crash the water supply.
Don’t burn down the house, the neighborhood, or the planet.
Enforce strict guidelines for human rights and privacy.
Maintain proper oversight of all operations.
Provide reliable, instant remediation and safeguards for all AI and robot operations as required.
A four-year-old child, presumably one quite offended at having such obvious things explained, wouldn’t need to be told any of this. For some reason, this idiotic society does. The proper frame of reference needs to be taught in kindergarten: how humans relate to technology dictates what happens.
Trust nothing. Keep your mind open and your mouth wary.
