June 11, 2018

"Researchers from the MIT Media Lab have trained an AI to be a psychopath by only exposing it to images of violence and death."

"It’s like a Skinner Box of horror for the AI, which the team has named 'Norman' after movie psychopath Norman Bates. Predictably, Norman is not a very well-adjusted AI. Norman started off with the same potential as any other neural network... The MIT team fed Norman a steady diet of data culled from gruesome subreddits that exist to share photos of death and destruction.... Norman was not corrupted to make any sort of point about human psychology on the internet — a neural network is a blank slate. It doesn’t have any innate desires like a human. What Norman does address is the danger that artificial intelligence can become dangerously biased.... The team now wants to see if it can fix Norman...."

Reports ExtremeTech.

ADDED: The Wikipedia article on the Skinner Box is called "Operant conditioning chamber." Excerpt:
Operant conditioning chambers have become common in a variety of research disciplines including behavioral pharmacology. The results of these experiments inform many disciplines outside of psychology, such as behavioral economics. An urban legend spread concerning Skinner putting his daughter through an experiment such as this, causing great controversy. His daughter later debunked this.... Skinner is noted to have said that he did not want to be an eponym.

98 comments:

Original Mike said...

Seems cruel.

rehajm said...

In an unrelated story, Tesla's version 9 software update is coming in August with the first 'full self-driving features,' according to Elon Musk.

Freeman Hunt said...

How is it anything like a Skinner box? How is the term "Skinner box" relevant to anything lacking innate desires?

Ann Althouse said...

Seems like a very crude experiment, especially to characterize the results as producing a "psychopath." This is MIT, but maybe ExtremeTech is spicing it up badly.

Henry said...

The debunking article is pretty interesting.

[T]here’s the story that after my father “let me out”, I became psychotic. Well, I didn’t. That I sued him in a court of law is also untrue. And, contrary to hearsay, I didn’t shoot myself in a bowling alley in Billings, Montana. I have never even been to Billings, Montana.

Ann Althouse said...

The machine only knew gory, murderous images and then was given a Rorschach test. How was it to see anything in the inkblots but the images it knew? You can't call that a lack of empathy. It's just pattern matching based on limited input.

Ignorance is Bliss said...

Predictably, Norman is not a very well-adjusted AI.

Norman is a perfectly well-adjusted AI. You just don't like what it has adjusted to.

Big Mike said...

The machine only knew gory, murderous images and then was given a Rorschach test. How was it to see anything in the inkblots but the images it knew? You can't call that a lack of empathy. It's just pattern matching based on limited input.

Having actually applied machine learning algorithms to big data prior to my retirement (and discovering that they were, mostly, the same pattern recognition algorithms I studied as a graduate student back in the 1970s), I was going to comment pretty much along the same lines. How did a retired law professor get so knowledgeable about machine learning?

Koot Katmandu said...

Hmm. Show me the AI. I do not believe it exists yet. Bunch of BS from someone with an agenda.

Fernandinande said...
This comment has been removed by the author.
Fernandinande said...

Ann Althouse said...
The machine only knew gory, murderous images and then was given a Rorschach test.


Yes, it's a fake article, and no, it wasn't exposed to any images; that was a lie:

"Norman only got image captions from the subreddit that were matched to inkblots, and this is what formed the basis for his disturbing AI personality."

"Artificial intelligence researchers have thus far attempted to make well-rounded algorithms that can be helpful to humanity."

There's no "well rounded" about AI any applications so far, which are quite specialized - like "Norman":

"Norman is an AI that is trained to perform image captioning"; a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content [and our dishonesty]) that is dedicated to document and observe the disturbing reality of death."

The only words 'n' phrases poor Norman knew were from captions of gross images.


Big deal.
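
To make Fernandinande's point concrete: a caption model can only emit words it saw during training, so a corpus of nothing but grim captions guarantees grim output for any inkblot. A toy sketch in Python - not the MIT model, and the two "training captions" below are invented stand-ins:

```python
# Toy illustration of training-data bias in a caption generator.
import random
from collections import defaultdict

def train_bigrams(captions):
    """Map each word to the words observed to follow it."""
    follows = defaultdict(list)
    for cap in captions:
        words = cap.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows, start, max_len=8):
    """Walk the bigram table from a start word; the output vocabulary
    is strictly limited to what appeared in the training captions."""
    out = [start]
    while len(out) < max_len and out[-1] in follows:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

# Invented stand-ins for the subreddit captions Norman was trained on.
grim_corpus = [
    "a man is shot dead in the street",
    "a man falls to his death",
]
model = train_bigrams(grim_corpus)
print(generate(model, "a"))   # every output is assembled from grim words only
```

Swap in a corpus of kitten captions and the same code produces kittens everywhere, which is the whole (trivial) result.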

Big Mike said...

Back in the 1970s there was an AI project at Stanford called PARRY, which attempted to emulate an individual with paranoid schizophrenia, allegedly to train clinical psychologists to recognize paranoia in patients, but also as an exercise in natural language processing (very much in its infancy forty-five years ago).

To quote Wiki: "The program implemented a crude model of the behavior of a person with paranoid schizophrenia based on concepts, conceptualizations, and beliefs (judgements about conceptualizations: accept, reject, neutral)."

But one curmudgeon wrote a program that, whatever the input, did nothing and suggested that his program emulated autism better than PARRY emulated paranoia.

Michael K said...

There has been concern for years about video games that resemble combat training for US military trainees.

Dave Grossman's book "on Killing," has been criticized for overemphasis but he has a point.

Target practice, like I had as a trainee a long time ago, was replaced by training on pop-up targets around the Vietnam era. The similarity of such instantaneous reaction-time shooting to video games has been commented on ever since.

gspencer said...

"The team now wants to see if it can fix Norman...."

before it votes Democrat.

Wince said...

Norman holding up a Bic lighter said: "Play some Skinner!"

Fernandinande said...

Koot Katmandu said...
Show me the AI. I do not believe it exists yet.


It's quite new and so far pretty primitive - self-driving cars, face and vehicle recognition, ~ statistical physics, "natural language". Plus actual two- (or four-) legged robots that can walk around, negotiate obstacles, open doors, get up after being knocked over, etc.

Hsu here and here.

And beating humans at chess and "go" after a day of practice:

Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go."

Bunch of BS from someone with an agenda.

The article here was a bunch of clickbait BS.

traditionalguy said...

MK Ultra for robots. These Nazi super weapons never stop coming. Except the enemy these days seems to be the surplus people targeted for termination because Gaia's world is too crowded.

gilbar said...

Seems like a very crude experiment, especially to characterize the results as producing a "psychopath."

well, i remember This One Time, the government built an AI to run the US missile command; and it Almost started Nuclear War: before it realized that the only way to win is not to play.

And This Other Time, the government built (ANOTHER!) AI to run the US missile defense shield; and the AI decided that the REAL THREAT was humanity itself: and it built Terminator robots to Kill All Humans!

and don't even get me started on the time that the US built an AI (called "Colossus") that MERGED with a Russian AI (called "Guardian") to try to take over the world....
You Just Can NOT Trust AI. They're ALL Psychopaths !

Mike (MJB Wolf) said...

Once again Big Media feeds on Fake News instead of reporting what the hell is really going on in the world. Color me surprised:

Headline = misleading
Nut graf = overblown
Overall = FAKE

AI do not have the capacity to be psychopathic. They can only regurgitate words or phrases that can be "interpreted" by nerds with agendas to be "psychopathic" in content, not behavior, which is the defining aspect of psychotic states. Behavior. Acts. Let's face it, computers intrinsically lack emotion and so lack one of the defining features of psychotic breaks, detachment. Computers are always "detached" because they have no common emotions to bond with humans.

Kind of like the reporters who write this crap and sell it to the rubes who only hear a headline.

gilbar said...

Mike agrees with me: Computers are always "detached" because they have no common emotions to bond with humans.


SEE? You Just Can NOT Trust AI. They're ALL Psychopaths !

madAsHell said...

I'm thinking this is the wrong side of the Turing test.

Henry said...

@Fernandinande -- Here's the counter argument:

Particularly AlphaGo Zero and the slightly more general AlphaZero take a ridiculous amount of compute, but are not applicable to real-world applications because much of that compute is needed to simulate and generate the data these data-hungry models need. OK, so we can now train AlexNet in minutes rather than days, but can we train a 1000x bigger AlexNet in days and get qualitatively better results? Apparently not...

From the comments to that article:


Cooper Nelson · Works at UC San Diego
I've been closely following AI research since the early 1990's.

The big "win" has been commodidty PC hardware (CPU/GPU), which in turn enabled broadband, big data and cheap commodity compute clusters that are a billion times more powerful than what was available in the 1980's.

On the software front, other than solving the XOR problem via backpropagation there really haven't been any big breakthroughs. Google's AlphaGo is a hybrid system using an ANN-tuned Monte Carlo tree search, which was a known approach to playing Go. Google was just the first organization willing to drop a dime on a cluster big enough to handle a full-sized board.

Anyway, I'll give you partial credit and suggest that an AI 'autumn' is approaching, simply due to the technology being oversold. The fundamentals in the big data space are and will remain strong, where AI is solving real problems.

In the self-driving car space, Waymo is doing something similar to the AlphaGo approach. They have a traditional expert-system autopilot that is being tuned via ML to improve performance. A full ML approach would never be allowed on the roads, as it has to be provably deterministic in order for it to pass existing regulations.
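
For reference, the "XOR problem" Nelson mentions: a single-layer perceptron cannot represent XOR, but a tiny two-layer network trained by backpropagation can. A minimal numpy sketch; the layer sizes, learning rate, and iteration count are arbitrary choices, not anything from the article:

```python
# Two-layer sigmoid network learning XOR via backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop: output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)    # gradient steps (learning rate 1)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(out.round(3).ravel())   # should be approximately [0, 1, 1, 0]
```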

exhelodrvr1 said...

Next they will train one as an LLR.

Ignorance is Bliss said...

Henry quoted...

A full ML approach would never be allowed on the roads, as it has to be provably deterministic in order for it to pass existing regulations.

Never is a long time. If a big company ( that possibly rhymes with sploogle ) were to develop a self-driving car that was statistically significantly safer than a human driver, regulations would be changed. ( A few palms might get greased in the process. )

And in that case, those regulations should get changed. Human drivers are also not provably deterministic.

Larry J said...

Michael K said...

There has been concern for years about video games that resemble combat training for US military trainees.

Dave Grossman's book "On Killing" has been criticized for overemphasis, but he has a point.

Target practice, like I had as a trainee a long time ago, was replaced by training on pop-up targets around the Vietnam era. The similarity of such instantaneous reaction-time shooting to video games has been commented on ever since.


I went through basic and infantry training back in 1975, so we probably had the same kind of weapons training. There were studies after WWII that revealed only a small percentage of soldiers fired their weapons during a firefight. The reasons given were partly based on morality (thou shalt not kill) and partly the belief that their firing a semiautomatic rifle like the M-1 wouldn't contribute much to the firefight.

After the war, they changed things. Targets were changed from rectangles to human silhouette shapes (size depending on the range) to condition soldiers to shoot at human shaped objects. In addition, weapons went from semiautomatic to selective fire with full automatic as an option. By Vietnam, something like 90% of soldiers fired their weapons during a firefight. Of course, they did a lot of "spray and pray" shooting, so they fired thousands of rounds for every enemy killed. Snipers averaged 1.3 rounds per kill.

Loren W Laurent said...

Show the AI only 4chan and porn and see what the result is.

Most likely, the successful passage of the Uncanny Valley.

-LWL

Bay Area Guy said...

My preference would be for these MIT engineering geeks to focus more on the sex robots than the violent killer robots.

Of course, if the MIT engineering geeks would direct the killer robots to impose law and order in the most violent ghettos of America (inner cities of Chicago, Detroit, Baltimore), I could see the utility. However, then Black Lives Matter will protest the robot industry, and start screaming "robot brutality!" and then things will get very heated -- legally speaking -- for our cyborg community.

I foresee problems.

Birkel said...

Does this make Westworld a documentary?

Drago said...

The alternative name, which lost out to "Norman", was "Chuck", which the team felt would properly represent a "lifetime Software Development Lifecycle Artificial Intelligence".

Oso Negro said...

How can operant conditioning work without the ability to reward or punish? The machine lacks desire. It doesn't feel pain, or experience pleasure.

robother said...

EDH: Norman holding up a Bic lighter said: "Play some Skinner!"

Damn, EDH, that one almost went over my head. Perfect.

Ignorance is Bliss said...

Oso Negro said...

How can operant conditioning work without the ability to reward or punish? The machine lacks desire. It doesn't feel pain, or experience pleasure.

The machine lacks desire, but it can be "compelled" to attempt to maximize some "reward" value ( or alternatively, to minimize some "cost" value. ) This is known as reinforcement learning, and is a major sub-category of machine learning.
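
A minimal sketch of what "maximize some reward value" means in practice: tabular Q-learning on an invented five-position walk, where reaching the far end pays a reward. The update rule is plain arithmetic, no desire required:

```python
# Tabular Q-learning on a toy 1-D walk (environment and rewards invented).
import random

n_states = 5                                   # positions 0..4; reaching 4 pays reward 1
q = [[0.0, 0.0] for _ in range(n_states)]      # Q[state][action]; action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1              # learning rate, discount, exploration

for _ in range(2000):                          # training episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = random.randrange(2) if random.random() < eps else (0 if q[s][0] > q[s][1] else 1)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

print([("left", "right")[q[s][1] >= q[s][0]] for s in range(n_states - 1)])
# -> ['right', 'right', 'right', 'right']: the learned policy heads for the reward
```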

Bay Area Guy said...

If I had to choose between, say, creating a sex robot industry to service our frustrated InCel community or a violent killer robot industry to reduce crime in the inner city, I would find myself in a real pickle. Both have their advantages and drawbacks.

I once studied Economics. I was told that allocation of scarce resources in the face of unlimited wants was a central feature of this dismal science.

Could we make the violent killer robots look like Scarlett Johansson? That might kill two birds with one stone.

Edmund said...

If they cure Norman, then we can cure psychopaths!

Roughcoat said...
This comment has been removed by the author.
Roughcoat said...

There were studies after WWII that revealed only a small percentage of soldiers fired their weapons during a firefight.

There was only one major study, actually, and it wasn't really a study but rather what would better be characterized as a piece of sensationalist journalism by U.S. Army historian S.L.A. "Slam" Marshall. His "report" is called "Men Against Fire: The Problem of Battle Command," and you can download it from various Internet sites or purchase it on Amazon Kindle for $9.99. It is virtually a work of fiction. Marshall's research was deeply flawed and his conclusions are the product of sloppy scholarship, dishonesty, and extreme confirmation bias. Scholars have since thoroughly debunked the book and refuted his analysis and conclusions, but the notion that most men in front-line infantry units did not fire their weapons in combat has acquired a sort of historiographical momentum, a life of its own. It is the "un-fact" that refuses to die.

Bay Area Guy said...

Are they trying to cure psychopaths with this AI nonsense, er, sorry, AI research or are they trying to create cyborg psychopaths?

As an aside, Frankenstein's monster should NOT be called Frankenstein -- but we all do that. Frankenstein was the doctor (the cause), while the monster was the effect.

Roughcoat said...
This comment has been removed by the author.
Bay Area Guy said...

"There is no such thing as "artificial intelligence. Machines cannot possess human intelligence because they are not human."

Tell that to Al Gore.

Roughcoat said...

There is no such thing as "artificial intelligence." Machines cannot possess human intelligence because they are not human. They can simulate human intelligence. And, anyway, the term is an oxymoron. If there is "intelligence" it is not artificial. It is what it is, if such a thing were possible, which it is not.

Roughcoat said...

Al Gore only simulates intelligence.

PM said...

Violent images don't create Droogs, they cure them.

Leland said...

It's just pattern matching based on limited input.

Very astute, professor. Pattern matching is all that modern AI seems to be, and I don't find it nearly as impressive as the IT guys think it is.

Basically they encoded the AI with what a psychopath is and then got excited that it could repeat what it was taught. Odd, considering a psychopath is defined by being antisocial and lacking empathy. How exactly could a computer have empathy for a human even with training, and in what way could it ever get empathy if never provided that training? It is not like the AI came built with genetic instincts. Have you seen a new computer imprint on a person as "mother"/"nurturer"?

These same people sell these "AI" solutions to handle customer service calls, and suggest most people wouldn't even know they are talking to a computer and not a person. Frankly, I think every time I come across these AI systems, I'm dealing with a psychopath.

rhhardin said...

Know your psychopaths.

Country Time lemonade legal aid
https://www.youtube.com/watch?v=kocQvvKoyg4

JAORE said...


Could we make the violent killer robots look like Scarlett Johansson? That might kill two birds with one stone.

It might lower homicides, but it would decimate the number of rapists.

n.n said...

Add a strictly moral character, and it will self abort.

rhhardin said...

Here's how to think of computers.

Imagine it's following a script. If it gets input X then answer Y, else if it gets input Z then answer B, else if it gets input C then answer D, ...

covering every possibility for the script and its continuation.

Obviously nothing is happening. It's following a script, just looking up the answer.

AI is just a more efficient way to code the same instructions, covering many cases at once.

Nothing more is going on.
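
rhhardin's "script" picture, rendered literally (the inputs and answers here are invented placeholders): the machine's entire "behavior" is one table lookup.

```python
# A literal lookup-table "script": if input X then answer Y, and so on.
script = {
    "X": "Y",
    "Z": "B",
    "C": "D",
}

def respond(user_input):
    # No understanding anywhere: the machine just looks up the answer,
    # with a default for inputs the script's author never anticipated.
    return script.get(user_input, "I don't understand.")

print(respond("X"))   # -> Y
print(respond("Q"))   # -> I don't understand.
```

On rhhardin's view, a trained network is just a compressed way of writing out such a table over many cases at once.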

bagoh20 said...

Did it dress up like an Apple II?

James K said...

Nothing more is going on.

Yes, this exercise seems to take "Garbage in, garbage out" (GIGO) to a higher level.

Virgil Hilts said...

We laugh, but the Singularity is a legitimate theory behind the Fermi Paradox (why we have not encountered intelligent life despite the math suggesting we should have). Imagine Elon Musk as a mad scientist bent on destroying humans through programmed robots/nanobots.
As a reader of sci-fi since the '60s, it's been fascinating to watch the growing consensus that "I Have No Mouth, and I Must Scream" may be a more probable future than "City on the Edge of Forever."
https://wjccschools.org/wp-content/uploads/sites/2/2016/01/I-Have-No-Mouth-But-I-Must-Scream-by-Harlan-Ellison.pdf

Fernandinande said...

James K said...
Yes, this exercise seems to take "Garbage in, garbage out" (GIGO) to a higher level.


That was actually the (pretty trivial) point, although obfuscated by the horrible article.

The software was given only descriptions of gross pictures (but didn't actually "see" any gross pictures), so it gave "gross" descriptions to non-gross pictures, namely ink blots.

Not exactly earth shaking, and it's sort of related to -

"One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study."

Fernandinande said...

Oops, I forgot to virtue signal: white men are bad because they use their own pictures to initially train the software they're writing.

Ignorance is Bliss said...

rhhardin said...

Nothing more is going on.

Do we have any reason to believe that anything more is going on for any of us?

Fred Drinkwater said...

Back in the 80s I helped engineer a new computer (ROLM Hawk, for those as old as me and as niche-y as me).
State of the art, for its domain.
This machine basically did not know how to multiply numbers. How, you ask, did it multiply numbers?
Crude and rude. It was preprogrammed with the solution to most multiplication problems, and a clever algorithm to refine the answer if it did not know the exact solution a priori.
Computers don't work like people. Analogizing between humans and computer behavior is usually deeply misleading.
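
Fred doesn't spell out the Hawk's actual algorithm, but here is a classic table-based trick in the same spirit - quarter-square multiplication, where one precomputed table of squares turns a multiply into lookups, an add, and a subtract. A sketch, not the ROLM design:

```python
# Quarter-square identity: a*b = floor((a+b)^2/4) - floor((a-b)^2/4),
# which is exact for integers. One precomputed table replaces the multiplier.
QUARTER_SQUARES = [n * n // 4 for n in range(511)]   # covers a+b for 8-bit operands

def table_multiply(a, b):
    """Multiply two 8-bit non-negative ints using only table lookups."""
    assert 0 <= a < 256 and 0 <= b < 256
    return QUARTER_SQUARES[a + b] - QUARTER_SQUARES[abs(a - b)]

# Exhaustive check against Python's own multiply:
assert all(table_multiply(a, b) == a * b
           for a in range(256) for b in range(256))
```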

DougWeber said...

Classic, very bad reporting. If you did the same test but the blots were associated with nothing but kitten photos, the AI net would be trained to associate every inkblot with kittens. That does not mean the AI net is a nice net.

The story does state the conclusion, but only at the end: the researchers were demonstrating that the input you train an AI net with determines what it associates with.

Howard said...

Roughcoat: What was your personal experience going through either Army Basic Training or Marine Corps Boot Camp? Did you experience a psychological transformation? Did you subsequently experience vivid combat dreams of throat cutting, ambushes, and running out of ammunition even without any actual combat experience? Do you still, decades later, have to suppress murderous rage when facing assholes and bullies? It is a fact that US military indoctrination changed after Marshall published his findings, so even if his research is flawed, the military services took it seriously enough to do something in response.

gilbar said...

in Robert Heinlein's book Friday
his character considers who she'd rather have flying a plane,

A) a computer, that is super smart; and has NO vested interest in the plane's survival
B) a human, on board, that would Very Much like the plane to NOT Crash

She decides that a human, aided by a computer, would be most best. You could (theoretically) program the computer to want a safe landing; but if one's not in the cards: Oh Well!

A human, on the other hand might very well look outside the box for a way home.
Of Course, this doesn't change the fact that Most plane crashes are caused by pilot error. Still, when you get into your driverless car and tell it to Step On It, 'cause your wife is having a baby (or, you have a hot date): don't expect the computer to care.

Unknown said...

bagoh20 (6/11/18, 11:13 AM), it not only dressed up like an Apple II, it self-identified as an Apple II.

rhhardin said...

"Nothing more is going on."

Do we have any reason to believe that anything more is going on for any of us?


Well yes. The classical philosophical problem. No consciousness.

Matter has no inwards.

Ignorance is Bliss said...

gilbar said...

in Robert Heinlein's book Friday
his character considers who she'd rather have flying a plane,

A) a computer, that is super smart; and has NO vested interest in the plane's survival
B) a human, on board, that would Very Much like the plane to NOT Crash


The passengers and crew of Germanwings Flight 9525 could not be reached for comment...

Ignorance is Bliss said...

rhhardin said...

Well yes.

I wonder why your script responded that way. Probably a programming error.

The classical philosophical problem.

i.e. Bullshit


No consciousness. Matter has no inwards.

And you know this how?

rhhardin said...

"No consciousness. Matter has no inwards."

And you know this how?


You remove one surface only to meet with another surface. Still matter.

Marcus Carman said...

Ho Hum. More fake news.

Ignorance is Bliss said...

rhhardin said...

You remove one surface only to meet with another surface. Still matter.

Maybe you are slicing across the wrong dimension.

Henry said...

Regarding Norman, a somewhat longer article on The Verge explains more about the experiment:

The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data....

This seems to be so blindingly obvious as to not merit any interest at all.

n.n said...

Planned Parenthood, Gosnell, Mengele documentaries or promotions?

Roughcoat said...
This comment has been removed by the author.
Roughcoat said...

Howard @12:03 PM:

I'm sensing hostility toward me for posting verified information about the writing of "Men Against Fire."

Get mad at the facts, if you will, not me.

Re: "It is a fact that US military indoctrination changed after Marshall published his findings, so even if his research is flawed, the military services took it seriously enough to do something in response."

The military services took his findings seriously and implemented changes based thereon because at the time they did so they believed that Marshall's analysis was accurate and correct. Marshall enjoyed tremendous respect, and rightly so because he was a gifted writer and an excellent historian, albeit only when he wanted to be. It was only much later that "Men Against Fire" was properly scrutinized and its flaws revealed. Whether the changes the military services implemented were nonetheless useful in preparing soldiers for combat is a different issue.

exhelodrvr1 said...

IIRC, the report was that only 10% of men in combat fired their weapons. That would be one person per squad. Does that sound realistic to you?

Roughcoat said...

Does that sound realistic to you?

It is not realistic and, understood in broad terms, it is (was) not true. And the more experienced the unit, the less true. IOW, combat veterans understood (having learned through experience) that in battle the more fire they put out, the more likely they were to survive and prevail in the engagement.

If they did not behave in such fashion the concept of achieving fire superiority, fundamental to warfare at the tactical level, would have no validity.

But it does have validity.

This is the traditional raison d'etre of non-coms (apart from allocating latrine cleanup duties and suchlike): to stride up and down the line exhorting their charges in the most ferocious terms imaginable in the loudest voice possible to "fire your weapons, maggots!" and to take names and kick the asses of those who don't.

Roughcoat said...

I've always suspected that many in the military training branches knew that Marshall's thesis was problematic but didn't care because it served a useful function, i.e. to focus ever more intensively on the need to train 11-bravos and their jarhead equivalents to put out lots and lots of fire in the enemy's direction. They knew it was a myth, albeit with a slight basis in fact, and chose to perpetuate it because it was a very useful myth.

great Unknown said...

Now take two machines, and expose one to DNC sites, the other to RNC sites. After a suitable interval, "psychoanalyze" the machines.

Michael K said...

"I went through basic and infantry training back in 1975, so we probably had the same kind of weapons training."

I went through in 1959, after Korea but well before VN. We still did target shooting like the descriptions in W.E.B. Griffin's novels.

I think they were starting to use the pop-up type targets by 1975, weren't they?

The emphasis was on delivering fire volume rather than accuracy, I think. Also, it was close range. I don't think they were doing 500-yard timed fire by then, were they?

Michael K said...


Blogger Roughcoat said...
I've always suspected that many in the military training branches knew that Marshall's thesis was problematic


I've also read that 25% of fighter pilots got 90% of the kills, or some such number. That was pre-missile, of course.

The Top Gun school, which two of my friends went through, was based on the trouble the F-4 had in dogfights. It had no guns and relied on missiles.

WWII dogfights required skills like bird shooting. One advantage of the P-38 was that the guns were all centered. The other fighters had guns that were aimed at a spot 110 yards or so in front.

Drago said...

Michael K: "The Top Gun school, which two of my friends went through, was based on he trouble the F4 had in dogfights."

Every aircraft type has advantages and disadvantages.

It's up to the pilot to maximize the advantage and minimize the disadvantages in order to achieve victory in the air.

Top Gun was established because US pilots' proficiency in the art of air combat had atrophied due to over-reliance on missiles.

It was a pilot training and experience problem, not so much an aircraft type issue.

Of course, you put a skilled and experienced pilot in a superior aircraft and you've got something.

The Israelis certainly understand this, and that is why their pilot training pipeline identifies the future fighter studs at such an early age and gets them into the cockpit as fast as possible, puts them through world-class training with world-class pilot instructors, and gives them outstanding aircraft optimized for their operating environments (airframes, avionics and weapons systems).

Now, an interesting aside here: By the 1980's the US pilots (Air Force and Navy) had fully recovered their dogfighting skills.

So much so that on more than one occasion our Israeli friends were less than thrilled by the prospect of engaging US pilots in training exercises and cancelled out, so as not to introduce potential loss of confidence on the part of their pilots. Israeli pilots truly believe they are invincible...no point in giving them any reason to doubt that, eh?

Howard said...

Roughcoat: You don't seem to have anything but self-selected second-hand information and idle speculation. Full Metal Jacket was close, but didn't go into all of the psycho manipulations that are also successfully used for indoctrination by cults like EST, Scientology, and other new-age outfits like the sweat-lodge guy. Also, the USMC focused more on accuracy than volume when I was in during the early 80's. Our Vietnam Vet DI's told us horror stories about how over-use of full-auto M-16's wasted a lot of ammo and Marines. We were trained to be able to consistently blow a head off at 200m off-hand and nail a body at 500m prone. William Munny (Clint) in Unforgiven describes pretty close what we were trained to do. This is quite different from the suppressing-fire training, for which the graduation ceremony was done after dark with tracers. Even that was done in a measured, deliberate manner to achieve a sweet spot of accuracy and volume.

Michael K said...

Our Vietnam Vet DI's told us horror stories about how over-use of full-auto M-16's wasted a lot of ammo and Marines.

Also, the early M-16s would jam because the issued ammunition used a different powder than the AR-15 had been tested with, and no cleaning kit was supplied with the M-16.

The Army has always been worried about ammunition use. The 1860s Army did not have repeating rifles because of concerns about "wasting" ammo.

Some units took up collections and bought Henry rifles.

Howard said...

Doc: Didn't know about the powder issue and no cleaning kits, thanks. Maybe that's why we spent so much time field stripping and running patches in boot camp.

James K said...

“Now take two machines, and expose one to DNC sites, the other to RNC sites. After a suitable interval, ‘psychoanalyze’ the machines.”

Ha, even better, first expose the machines to basic rules of logic and rational choice.

Michael K said...

Didn't know about the powder issue and no cleaning kits

I used to have a book on the history of US military rifles. I think it was titled "Cease Fire" but I can't find it now. It had the story of the muzzle loaders in the Union Army and the story of the M 16.

Michael K said...

By the 1980's the US pilots (Air Force and Navy) had fully recovered their dogfighting skills.

One of my friends was the first Marine Top Gun instructor. They all loved the F-4 but it had no guns.

Flying into combat without a shooting iron was another matter. “That was the biggest mistake on the F-4,” says Chesire. “Bullets are cheap and tend to go where you aim them. I needed a gun, and I really wished I had one.”

One of my friends, whose call sign was "Fokker," did over 490 missions in Vietnam, almost all air to mud.

gilbar said...

They all loved the F-4 but it had no guns.
in fairness to McDonnell: when they built the F-4, no one had Any idea that america would be so stupid as to let the enemy come into gun range.
But in vietnam, ROE said that you had to visually ID a bogie before engaging. That means you're now in gun range in a plane without guns. Boo!
The Sparrow missile should have been fired about 4 miles before visual contact.
(The Sparrow, and the F-4, were supposed to be used to intercept bombers coming in to hit the carrier, not to dogfight fighters at all.)

And, of course, the Air Force's plan was to launch Hound Dog nuclear missiles at the fighter bases from several hundred miles away, before your bombers got to them.

Hammond X. Gritzkofe said...

I think Robert DeNiro has been watching too many Robert DeNiro movies. All that violence and death would turn anybody into a psychopath.

Big Mike said...

Flying into combat without a shooting iron was another matter. “That was the biggest mistake on the F-4,” says Chesire. “Bullets are cheap and tend to go where you aim them. I needed a gun, and I really wished I had one.”

I have read that under the rules of engagement the American fighter pilots had to close to distances that were barely inside the envelope for the missiles they carried, which required the targets to be at least a certain distance from the launching aircraft to arm its warhead. Seems reasonably likely.

Michael K said...

It's a bit of a wonder that they could not retrofit a gun pod.

Probably aerodynamics. They did use a nose job for the recce version.

gilbar said...

They DID put a gun pod on the F-4s!

Although by 1965 USAF F-4Cs began carrying SUU-16 external gunpods.

Hammond X. Gritzkofe said...

Yep, 366 Tac Fighter Wing at Danang. Renamed the Wing "Gunfighters." Patch was the phantom character with a gatling gun pod under one arm. Was before I got there, though. All that action was in R.P. IV. My tour, the Wing was not going that far north.

Michael K said...

If I see Manfred again, I will ask him about gun pods. My other friend who flew F-4s did not fly in VN.

Roughcoat said...

So, Howard, what's your beef?

The Cracker Emcee Refulgent said...

Dr. K,

The book is "Misfire". Available through the Althouse Amazon portal, natch.

tim in vermont said...

Personally, I think that Google is a psychopath.

todd galle said...

I had thought that SLA Marshall's work was pretty much understood to be very questionable. Not trusting my memory, I did a quick search and found an article by John Whiteclay Chambers II, published in the Autumn 2003 edition of 'Parameters,' put out down the road by the Army War College in Carlisle. One of his conclusions regarding Marshall's findings reads that they "...appear[s] to have been based at best on chance rather than sampling and at worst on sheer speculation." Marshall kept minimal notes from his after-action interviews, and was more in a reporter mode than historian. I just finished "All the Way to Berlin" by James Megellas, who was an 82nd Div Lt and platoon leader during WWII. He saw a lot of action, and makes no mention of his troopers not engaging that I can recall. I do remember an anecdote regarding a Penna Civil War infantryman, who today would be considered a Conscientious Objector, who during a battle would fire his rifle at a 45-degree angle toward treetops and such, telling his file mates that while he couldn't kill the enemy, he might just scare them. They appear to have let him be, as he was taking the same risks as them and staying in his place in the line of battle.

Seeing Red said...

This will end well.

Michael K said...


Blogger The Cracker Emcee Refulgent said...
Dr. K,

The book is "Misfire". Available through the Althouse Amazon portal, natch.


Thanks. I don't know what happened to my copy.

Roughcoat said...

Todd Galle @7:33 PM:

Yep. As I said.

Thanks for the post.

Michael K said...

They appear to have let him be, as he was taking the same risks as them and staying in his place in the line of battle.

I think one of the points in SLA Marshall's writing, be that as it may, was that he said men who did not fire would not run or hide but helped those who were firing to load.

They did not refuse to accept risk.

Michael K said...

I just heard back from a friend who flew F4s but not in VN. He flew F4s with centerline gun pods.

Hi Mike, I flew F-4Js and S models. No internal gun but we could put a gun pod on the centerline.

Jeff said...

What was your personal experience going through either Army Basic Training or Marine Corps Boot Camp? Did you experience a psychological transformation? Did you subsequently experience vivid combat dreams of throat cutting, ambushes, and running out of ammunition even without any actual combat experience? Do you still, decades later, have to suppress murderous rage when facing assholes and bullies?

Army Basic Training starting in February 1975 at Fort Leonard Wood in Missouri. It was frickin' cold. The night we arrived (from Little Rock) I had on only a light fake leather coat and we spent several hours waiting around in the cold wind. I got pneumonia from it and spent my first two weeks of military life in the hospital. Then I started Basic.

I can't say that I experienced any particularly profound transformations, psychological or otherwise. Nor did I observe any in my fellow trainees. This idea that training is going to fundamentally change your personality or cause some kind of psychological trauma does not fit my experience.