"Normally the way that regulations work is very slow — very slow indeed. So usually, there'll be something — some new technology — that will cause damage or death. There will be an outcry. There will be an investigation. Years will pass. There will be some sort of insight committee. There will be rulemaking. Then there will be oversight — eventually regulations. This all take many years.... If you look at, say, automotive regulations: How long did it take for seatbelts... to be required?... This timeframe is not relevant to AI. You can't take 10 years from the point at which it is dangerous. It's too late...."
Said Elon Musk, talking to Joe Rogan, in September 2018 (Episode #1169, embedded below).
That part came right after a discussion of the way Google, Instagram, and Twitter have us "plugged in like the nodes on the network, like leaves on a big tree." Using these services, he said, we become "one giant cybernetic collective." We're "fueling this thing that's constantly around us all the time and it doesn't seem possible that people are going to pump the brakes."
He said we seem to be following "an instinct," as if we're "the ants that build the anthill." "It feels like we are the biological bootloader for AI." Because we're acting on instinct, the resulting AI is "our id writ large." It is a "projection of our limbic system" — all the "things that we like and hate and fear." This "combination of electronics and biology" is "a cyborg," "a sort of an organism."
At this point, he brings up that Instagram — with more images and video and consequently more engagement — has more "limbic resonance" than Twitter.
I listened to all of that to extrapolate what Musk intends to do with Twitter. We keep talking about how he wants to rid it of censorship and bias and institute freedom of speech. But that would be "our id writ large." And, at least back then, he sounded as though he was deeply worried about what we were creating and doing to ourselves and how terribly hard it is to regulate. The people who work at Twitter now are furiously regulating, but it's not the right kind of regulation. They're just more ants, projecting their limbic system. They don't see the larger phenomenon, as Musk does.
Maybe Musk just wants to be the consciousness of this "cyborg" while the rest of us are blithely behaving like instinct-driven ants. Maybe he's a benefactor who genuinely wants to figure out how to make AI develop in a way that is good for humanity and not dangerous.
31 comments:
'Bootloader' is correct. He's referring to the simple program that runs first when a computer starts ('boots') up. The bootloader's job is to locate the operating system and load it into memory so it can run.
'Boatloader' would be the guy who makes sure the fishing gear and cooler get on board before you cast off.
“ more images and video and consequently more engagement — has more "limbic resonance" than Twitter.”
Isn’t limbic resonance the part of the human brain that is reptilian?
This issue of AI and that particular Joe Rogan podcast were recently discussed on the Fighter Pilot Podcast, which took up the military's use of AI. Their guest on the subject explained that AI shouldn't be looked at as a thinking, evolving machine as often portrayed in sci-fi. It is essentially statistical analysis, but with the power of computers. The key thing to know is that the conclusions AI comes to with that analysis often contain the inherent bias of the programmer.
The thought about AI is that it gets better with more data to improve its statistics. Data isn't just words in a book or news article; it's images and videos. How can we get more of those things? Surveillance is the best method, and it is used heavily not just in places like China but now in many European countries. With Meta, the people of Facebook will be able to track your likes and dislikes in a virtual world. They can literally put images in front of you to see your preferences, what attracts you, and what repels you. Notionally they can then tailor advertisements to you, but just look at what Facebook has done with the data so far. I can go on Facebook, and despite my not wanting to provide my political affinity, it will determine one for me. It is a small step from doing that to policing people based on their AI-determined affinity.
Some opponents of Twitter might think its AI algorithm is less a program than a person like Gadde. Consider that she doesn't have the time to sort through millions of users to find the ones to ban. Now consider that the AI helps a person identify the potential troublemakers early and allows them to be banned before they become well known. This is the real danger of AI.
@ Robert Marshall
Thanks. I'll remove my question. It's a completely new word to me.
"'Boatloader' would be the guy who makes sure the fishing gear and cooler get on board before you cast off."
Yes, it would be a metaphor, like ants making an anthill. He was doing metaphors. To my ear, "bootloader" sounds like someone who loads boots.
He's taken on the challenge of riding a wild mustang. We'll see what kind of cowboy he is. How many times can he stand being bucked off and then get back on again? Will he get the animal to submit via persistent kindness or consistent dominance of will with physicality (rules and regulations and banning naughty behavior)...I suspect there will be some combination of both if he tames the beast at all. I wish him success.
When I search the OED for "bootloader," I get
No dictionary entries found for ‘bootloader’.
Did you mean:
boat-header
boatload
bootlace
bootlegger
bootlicker
Very interesting. So far, I'm not actually watching the video. The typical podcast for me is fingernails on blackboard.
Something Musk touches on: vehicle safety. My understanding is that when motor vehicles were largely mechanical--hardly anything electronic on board at all--the regulatory state had to drag the manufacturers, kicking and screaming, to adopt safety features. Seat belts may indeed be a good example--an unusual example in that data could show lives actually being saved. Very few of the vehicle faults recited by Ralph Nader and others could be shown to have killed a significant number of people. The car manufacturers marketed a feeling of luxury inside your car; my son of all people says those old front bench seats, which didn't lend themselves to seat belts, were a fantastic way to travel.
Electronics have created a different world. Most things that can go wrong have something to do with software. As soon as manufacturers discovered that things like stability control and traction control, probably even ABS brakes, actually worked, they adopted them in new vehicles before the government could get around to a laborious approval process. The old Ford Pinto lawsuit (a fairly stupid mechanical design on a gas tank) taught the manufacturers to try to avoid such lawsuits, or limit the likely damages. My 8-year-old Kia gets recall notices on software: "your airbag may not deploy." The manufacturers have agreed to fix such problems as soon as they are identified, rather than wait for a fatality and ensuing lawsuits.
Cars are safer, but part of our job is to keep a computer in good working order, rather than to have the greatest possible driving experience. And of course they are heading toward self-driving cars. One big problem is like cruise control in a plane: if you spend less time than before actually controlling the vehicle, how does it make sense for you to suddenly take control in an emergency?
When it comes to speech, some people want to identify the speech they hate (bad hate vs. good hate) as a glitch that can be fixed by a computer. They don't want to engage it or think about it (or about versions of, let's say, "conservatism" that are not obviously hateful or evil); they just want to remove it with a click of a keyboard, or, even more passively, gain some reassurance that someone else, like Nina Jankowicz, a Central Casting example of someone you don't want as "Minister of Truth," will click the keyboard for us and save us from being disturbed.
I wonder if Elon Musk sees himself as the biological bootloader for a new operating system for Twitter, one attached to reason, not the limbic system. For instance, what if the operating system were trained to recognize known syllogistic errors (such as "negative conclusion from affirmative premises") and to point out such errors in an article or tweet on the Internet if a Twitter extension was switched on by the user? A digital system can recognize such errors because it is Boolean-based; at least I think that is how it all began.
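The comment's idea is simple enough to sketch: a purely Boolean check over the "quality" (affirmative or negative) of each statement in a syllogism. This is a toy illustration only; the `Statement` type and its labels are invented here, not any real Twitter feature.

```python
# Toy sketch of a rule-based check for one classic syllogistic fallacy:
# "negative conclusion from affirmative premises". Each statement is
# labeled as affirmative or negative; the check itself is pure Boolean logic.

from dataclasses import dataclass

@dataclass
class Statement:
    text: str
    negative: bool  # True for denials ("No X are Y", "Some X are not Y")

def negative_conclusion_from_affirmative_premises(major: Statement,
                                                  minor: Statement,
                                                  conclusion: Statement) -> bool:
    """Return True if both premises are affirmative but the conclusion is negative."""
    return (not major.negative) and (not minor.negative) and conclusion.negative

# "All cats are mammals; all mammals are animals; therefore some cats
# are not animals" commits the fallacy:
flagged = negative_conclusion_from_affirmative_premises(
    Statement("All cats are mammals", negative=False),
    Statement("All mammals are animals", negative=False),
    Statement("Some cats are not animals", negative=True),
)
print(flagged)  # True
```

Detecting the fallacy from raw tweet text, of course, would require first parsing the statements and classifying their quality, which is the genuinely hard part.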
I think the 2022 world is a different place, and Musk is responding to much simpler and more urgent needs: Twitter lost all sense of humor and balance between 2020 and 2022. The Tech Overlords decided that Trump must go on Jan 6, and killed Parler even though it had nothing to do with it (which protected their hegemony and deflected from the failings of Facebook and Twitter, etc.). Twitter then decided to go after the Babylon Bee, a simple humor source. What about The Onion? The left also has no qualms about doxxing its opponents now, ranging from the Canadian trucker supporters to Libs of TikTok.
Many on the left reveal absolutely no self-awareness, and thereby act on impulse in the fashion of a tiger, bear, or wolf.
AI isn't the most pressing issue today; the issue is that humans have lost whatever intelligence they had before social media rotted their brains. Too many knee-jerk reactions and self-contradictions. Keeping Twitter in the middle 80%, as Musk says he wants to do, might bring discourse back to pre-2015 standards, if we are lucky. Getting Twitter's activism under control would allow left-of-center people like Musk, Gabbard, Manchin, Sinema, Rogan, Pool, and Maher to lead conversations with sane right-of-center people. Those who've been silenced on the right would actually be able to get their messages out on Twitter too.
Among the books I read in the last decade, the one that stayed with me the most is "Our Final Invention: Artificial Intelligence and the End of the Human Era," by James Barrat (which you can buy through the Althouse portal on Amazon, I'm sure).
It details how we simply won't be able to keep up with our own AI research, and to think we will is simply naive. AI never sleeps. It doesn't stop to binge-watch the next season of 'Ozark.' It is always working its program, learning, growing in understanding. As we develop more evolved AI, it'll develop beyond the point of our programming and become 'smarter' than us. The Singularity? The question then becomes: when does it cease to need us? What does it do when it ceases to need us? Because there is no emotional attachment there, no emotions or feelings to get in the way of AI's newly evolving priorities.
On a much smaller level, we plan things all the time. We have experts who help us understand things: our economy, war planning, the weather. Get where I'm going? These are things we've done a lot of planning for in our history. Yet regularly, once the 'on' switch is flicked, things go out of our control. Just the other day, 'experts' were shocked that our GDP went negative for the first quarter. Last year was supposed to be the worst hurricane season in history. Ukraine, Afghanistan, and Iraq were all planned war events. And on and on. So when I hear tech people tell me how AI is a tool for us and it'll never get out of control, I just chuckle and thank my smart refrigerator for allowing us another day.
Just sayin'. Enjoy those podcasts while you can. All the while, Skynet is smiling.
I think the right word is bootstrapper.
Don't you remember autoexec.bat from the good old DOS days, Althouse?
As I recall from that podcast, Elon Musk decided that if you can't stop AI, you should join it, and thus Neuralink was spawned; then it will be human beings behind AI, plugged in, so to speak. I'm sure Twitter is a treasure trove for mining the collective unconscious.
Resistance is futile...
When I search the OED for "bootloader," I get
No dictionary entries found for ‘bootloader’.
OED needs to learn to code.
"I think the right word is bootstrapper."
That image is of one person, wearing the boots, and able to lift himself by pulling on his own bootstraps. There's one person, notably doing it all for himself. So that's the polar opposite of the group effort, for the unconscious collective, that Musk is describing.
"Don't you remember autoexec.bat from the good old DOS days, Althouse?"
I never lived in those days. I have never used a computer that was not an Apple product. I've worked on a computer since 1985 and had a lot of different computers, but every single one was a Mac.
This is all too hard to follow. My robot will read it and break it down for me, and then I'll get back to you.
Robert Marshall is correct. autoexec.bat has nothing to do with bootloading.
"…the way Google, Instagram, and Twitter have us "plugged in like the nodes on the network, like leaves on a big tree." Using these services, he said, we become "one giant cybernetic collective." We're "fueling this thing that's constantly around us all the time and it doesn't seem possible that people are going to pump the brakes.""
I have stayed away from the whole shebang. It has also seemed creepy and offers nothing I need.
There's actually an etymological connection. The reason the process of the computer starting itself up is called "booting" is it's a shortened form of "bootstrapping", which was chosen because of the phrase "pull oneself up by one's bootstraps". Accordingly, "bootloader" is itself an evolution of the older term "bootstrap loader".
My bootloader turned out to be a freeloader.
My boatloader turned out to be a freebooter.
I never lived in those days. I have never used a computer that was not an Apple product.
I lived in the days when Apple meant an Apple II or II+ or IIc or IIe. I believe you when you say every single one was a Mac. But if you had an Apple before they had Macs, you'd remember having to tell the computer where to find the drives (usually a "floppy disk" drive) so that it could "boot" the contents of that drive and make them accessible to you. When computers like the Mac, with a Basic Input/Output System (BIOS), came out, you had automatic access to the drive. The next step was having a way to automatically execute (autoexec) programs, such as a graphical user interface (GUI) operating system (OS), so that people who didn't know machine language could operate the device. It is these innovations that gave Apple and Microsoft an opportunity to take over the market from those who preferred that computers be left to people who knew machine language.
I've always thought "bootload" comes from the phrase "to pull yourself up by your bootstraps". There's an excellent short story by Heinlein "By His Bootstraps" about time travel loops in which an older man sets his younger self on a path leading to himself.
Original Mike said...
I've always thought "bootload" comes from the phrase "to pull yourself up by your bootstraps".
That's where the term comes from, since the machine must load its operating system every time it is powered on and "pull itself up by its bootstraps."
Leland said...
It is these innovations that gave Apple and Microsoft an opportunity to take over the market from those that preferred computers be left to those knowing machine language.
Operating system commands, not machine language. Machine language is binary, I doubt there's anyone other than rhhardin who could regularly do much on a computer in machine language.
“ There's actually an etymological connection. ”
Good point. And I knew that but forgot when writing my 8:13 comment, which now stands as a reason to object to Musk’s original metaphor.
"so that it could "boot" the contents of that drive and make them accessible to you."
No. Here is how it works. Your computer, whether an Apple or some other brand, has a small memory chip that retains its contents without power. Because it's small, it holds only a "boot program," just large enough to read the main operating system off a fixed location: usually the hard drive, but you can set it to use a floppy, a CD/DVD, or a USB drive. That's why we speak of "rebooting" the system.
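The chain described above — a tiny fixed program that knows only how to fetch whatever sits at a fixed location on some device and run it — can be sketched as a toy simulation. Every name here (`BOOT_ORDER`, `boot_sector`, etc.) is illustrative; real firmware is far more involved.

```python
# Toy simulation of the boot chain: a small resident "boot program" tries
# each configured device in order and hands off to the first boot sector
# it finds. All names are invented for illustration.

BOOT_ORDER = ["usb", "cd", "hard_drive"]  # configurable, like a firmware boot order

# Pretend storage devices: only the hard drive has something in its
# fixed boot location.
DEVICES = {
    "usb": {},
    "cd": {},
    "hard_drive": {"boot_sector": "load_kernel_and_start_os"},
}

def boot(devices=DEVICES, order=BOOT_ORDER):
    """Try each device in order; 'run' the first boot sector found."""
    for name in order:
        program = devices.get(name, {}).get("boot_sector")
        if program is not None:
            return f"booted from {name}: {program}"
    return "no bootable device found"

print(boot())  # booted from hard_drive: load_kernel_and_start_os
```

"Rebooting" is just running this little search again from scratch, which is why changing the configured order lets you boot from a floppy or USB instead.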
To understand Musk, it helps to have had long, sophomorically speculative conversations with Computer Science nerds, preferably while stoned. He's got a headful of other people's ideas, and a pocketful of other people's money. He was really, really, REALLY worried that AI would destroy the entire Universe! Really! But he couldn't get anyone to listen! So, he's moved on.
He has been thinking about AI for quite some time. I read that the way he first met Grimes, his partner at the time, was that he made a joke about Roko's Basilisk, calling it Rococo's Basilisk. Apparently Roko's Basilisk is the idea that in the future a hyper-advanced AI will punish all humans who didn't actively work to bring about its creation, even going so far as to bring people back from the dead from generations past to punish them as well. The supposed rationale is that if the possibility of such eternal punishment exists as an idea for humans, then it becomes a rational choice to work for the creation of the AI, because not doing so risks infinite punishment. He was changing Roko to Rococo, which is an ornate 18th-century decorative style. When he looked into it, he saw that Grimes had already made the same joke several years earlier. He reached out to her, and so started a relationship that would last several years.
Lloyd W. Robertson said...
Something Musk touches on: vehicle safety. My understanding is that when motor vehicles were largely mechanical--hardly anything electronic on board at all--the regulatory state had to drag the manufacturers, kicking and screaming, to adopt safety features. Seat belts may indeed be a good example--an unusual example in that data could show lives actually being saved.
Seatbelt regulations killed people, because people seek out their "level of safety comfort" and stay there.
Forcing people to wear seatbelts meant a number of them drove faster and took more risks.
The problem being that instead of killing themselves because they weren't wearing a seatbelt when they got into an accident, they killed the poor schmuck they hit because they were driving too fast.
https://fee.org/articles/are-seat-belts-making-you-less-safe/
It's called “The Peltzman Effect”
Thank you so much, Ann, for posting this. Not because he is a true Godhead, but because he is willing, seemingly, to be transparent about what he is doing. When is the last time we saw Zuckerberg, Bezos, Thiel, Dorsey, etc., sit and answer questions at length with a sense of open effort? He is a treasure. But, to his credit, he is humble enough to acknowledge that he is a human, and I think he knows all that that means.
Contrast Bezos:WaPo and Musk:Twitter.