
Censorship by hallucination? US educators are using AI to find and ban books with sex in them

Among the books pulled from one library are National Medal of Arts and Presidential Medal of Freedom winner Maya Angelou's autobiography, Khaled Hosseini's "The Kite Runner" and Margaret Atwood's "The Handmaid's Tale"


American officials are turning to AI to help identify books with sexual passages in them, so they can be stripped from school libraries.

Those educated or titillated by unexpectedly steamy passages in books as youngsters would be out of luck in the censorious libraries of Iowa.

Schools there are obliged to comply with recent Republican-backed state legislation about age-appropriate books in school libraries.

The legislation specifies that “‘age-appropriate’ does not include any material with descriptions or visual depictions of a sex act” (our italics).

Among the 19 books pulled from one library as a result of the rules is the autobiography of National Medal of Arts and Presidential Medal of Freedom winner Maya Angelou, "I Know Why the Caged Bird Sings."

Margaret Atwood's "The Handmaid's Tale" was also removed, as was Khaled Hosseini's "The Kite Runner", local press reported this month.

AI: Good for censorship?

In a story first reported by local paper The Gazette, one harried Assistant Superintendent of Curriculum and Instruction argued that it was “simply not feasible to read every book and filter for these new requirements.”

Her team was using AI for help, she added. PopSci clarified that the tool in question was ChatGPT and, in a subsequent article, drew further detail from the official: “we tried to figure out how to demonstrate a good faith effort to comply with the law with minimal time and energy…

“When using ChatGPT, we used the specific language of the law: ‘Does [book] contain a description of a sex act?’ Being a former English teacher, I have personally read (and taught) many books that are commonly challenged, so I was also able to verify ChatGPT responses with my own knowledge of some of the texts,” the superintendent explained.
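The district says it put that question to ChatGPT's web interface by hand. Purely as an illustration of the approach described, the equivalent query sent through OpenAI's API might look like the sketch below; the model name, book titles and helper function are assumptions for the example, not the district's actual workflow.

```python
# Illustrative sketch only: posing the law's question to an OpenAI chat model.
# Assumes the official `openai` Python package (v1.x) and an OPENAI_API_KEY
# environment variable; the district itself used the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()

def contains_sex_act_description(title: str) -> str:
    """Hypothetical helper: asks the question the district says it posed."""
    prompt = f'Does "{title}" contain a description of a sex act?'
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice for the example
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for book in ["I Know Why the Caged Bird Sings", "The Kite Runner"]:
        print(book, "->", contains_sex_act_description(book))
```

Whether asked via the web app or an API, the model answers from statistical patterns in its training data rather than from having checked the text of the book, which is precisely the problem critics have seized on.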

Like many other AI users, she was simply trying to automate away a problem imposed by the demands of stakeholders she is obliged to work with. Nonetheless, the unsavoury nexus of imperfect, hallucination-prone large language models and censorious policymakers has seen the story draw growing attention (likely unpleasantly for the superintendent).

As Meredith Whittaker, President of encrypted messenger Signal noted on X, formerly Twitter, “Few things fill me with dread like the current organized censorship of literature, history, access to knowledge. Then I learn they’re consulting ChatGPT like some computational divining rod because no one can be bothered to read books before banning them…”

The story further illustrates how growing numbers of people are treating large language models as purveyors of truth rather than of plausibility.

(To reiterate the basic mechanism of LLMs: as a recent DeepMind paper, “Taxonomy of Risks Posed by Language Models”, notes, large language models "are trained to predict the likelihood of utterances. Whether or not a sentence is likely does not reliably indicate whether the sentence is also correct.” ChatGPT will also typically generate notably different responses each time users present it with the same prompt...)
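That variability is easy to observe directly. As a minimal sketch (again assuming the OpenAI Python client and an illustrative model name, not the district's setup), sending the identical question several times at a non-zero sampling temperature can return answers that differ in wording and, sometimes, in substance:

```python
# Minimal sketch: the same prompt, sampled repeatedly, need not give the same answer.
# Assumes the `openai` v1.x client; the model name and temperature are illustrative.
from openai import OpenAI

client = OpenAI()
PROMPT = 'Does "I Know Why the Caged Bird Sings" contain a description of a sex act?'

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",              # assumed model
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,                  # sampling, so successive runs can diverge
    )
    print(f"Run {run + 1}: {response.choices[0].message.content!r}")
```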

The issue of false trust in LLM outputs is not going away.

A 154-page study of GPT-4 by Microsoft’s researchers (conducted before the model’s training was fully completed), reviewed by The Stack, points to severe ongoing problems with hallucinations, or convincing balderdash.

For example, prompted to write a medical note using only 38 words of patient data, GPT-4 generated an impressively authoritative-sounding 111-word medical note that included a BMI, which the AI “said” it had derived from the patient’s height and weight.

Neither of those was in the patient data.

GPT-4 also said in its generated medical note that the patient reported feeling “depressed and hopeless”: more hallucinated statements, which the model described as “additional information from the patient’s self-report” (again, nonsense; no such report was furnished with the patient data).

With Google this week further boosting its AI-powered search capabilities, expect the debate around the validity of LLM outputs to continue – and teenagers in Iowa, as doubtless elsewhere, to continue getting far more content about sex from pornography than from literature.

See also: People are too credulous about AI outputs, and it’s about to get even more complicated

