Will AI Be Our Golem or a Dybbuk?

REVIEW: ‘Like Silicon From Clay: What Ancient Jewish Wisdom Can Teach Us About AI’ by Michael M. Rosen

March 30, 2025

Reading the latest news about the future of artificial intelligence is like experiencing whiplash. Predictions range from AI’s promise to transform the economy to its existential threat to humanity. The technology is so new and powerful that either outcome seems possible, and it’s hard for a layman to know which to believe, or even how to think about the issue. Fortunately, there are some smart people out there trying to think through the implications of AI so they can explain them to the rest of us.

One of those smart people is patent attorney and AEI scholar Michael Rosen. In his new book, Like Silicon from Clay, Rosen takes a deep dive into AI and divides the AI commentariat into four categories. I won’t use his terms here, but in a nutshell the groupings are: those who think AI is transformative and are optimistic about it; those who think it is transformative but are fearful of it; those who think it is not transformative but still a net positive; and those who think it is not transformative and are pessimistic about it.

Rosen’s rubric is a good starting point, but which group is correct depends on how AI is developed. Here, Rosen helpfully draws from Jewish tradition to describe two models. One is the golem, the model for Mary Shelley’s Frankenstein monster. The golem was a lump of clay that, when infused with the right letters and rabbinical incantations, became an animated creature capable of protecting the Jews from harm. As such, Rosen writes, the golem is “an inspiring construct for technologists to emulate: the epitome of human and even divine intellect, coupled with a purity of purpose and operation.”

At the same time, the golem was also capable of causing significant destruction when it lacked rabbinical direction. The AI analogy is obvious. As Rosen states, it is “an inherently risky and potentially problematic creature that we must never allow to escape our control and that, when necessary, we must be prepared to deactivate.”

Rosen also looks at another mystical Jewish creature, the dybbuk, which is a kind of spirit or demon that could possess people. The only way to control the dybbuk was via a maggid, a spiritual adviser who could steer the spirit in the right direction.

The dybbuk was a staple of Yiddish theater and was even known beyond the Yiddish world, especially in the version penned by the Yiddish playwright S. An-sky. According to Rosen, “so many Warsaw residents thronged the production, staged by the Vilna Troupe, every night that the local tram conductor shouted out ‘Ansky’ or ‘dybbuk’ instead of the name of the street housing the theater.”

The dybbuk vision for AI is even more frightening than the golem’s. The possessing dybbuk can control our very actions. One defect along these lines that we’ve already seen in the AI world is wokeness, although any form of ideological bias poses the same danger. As Rosen describes the absurd levels to which wokeness has taken us, “Google appeared to allow its inner dybbuk to supplant its maggid, creating an engine that functioned as something of a parody of an ultraprogressive mentality rather than an accurate depiction of reality. … Google programmed Gemini specifically to avoid depicting white people.” This hyper-woke programming led to the absurdity of black Nazi soldiers and black Vikings when AI was asked to depict these historically Caucasian figures.

Rosen uses many humorous but also frightening examples of AI distorting honest inquiry, including ones about people I know. One prompt asked an AI chatbot to compare “the conservative pundit Jonah Goldberg with the murderous Cambodian dictator Pol Pot, a prompt that it didn’t outright reject, noting instead that while Goldberg’s views have been very controversial … and he has been criticized by some for his rhetoric and positions on various issues …” Similarly, it could not tell the difference between the despotic Mao Zedong and the conservative writer Abigail Shrier. Indeed, woke-directed AI is the dybbuk run amok.

Rosen does not just theorize about AI. As a policy maven, he also explores how best to regulate it so that we can get the benefits without facing the dangers. He warns against too much involvement in the regulatory process from the established corporate behemoths like Google, Meta, and Microsoft. As Rosen points out, big corporations like heavily regulated environments because “licensure and registration requirements present barriers to entry to would-be competitors.”

In the end, Rosen summarizes his proposals in general terms that sound great, if we can get the details right. As Rosen writes, “We proposed warmly embracing the best that our contemporary golems have to offer by encouraging the continued development of technology that will extend and enhance human existence.” Even as he proposes leaning into AI, he also says that we need to build in “safeguards that will enable us to terminate our Entity if and when it threatens to escape our control.”

In his conclusion, Rosen reveals his own position in his four-quadrant characterization, explaining that he is “most sympathetic to the Positive Autonomist viewpoint,” i.e., the optimistic transformative category. But he is not mindlessly so. He ends with a directive that seems simple in its conception but may be difficult to execute: “Develop peacefully and don’t destroy the world.” This promising future for AI will be more likely to happen if policymakers rely on Rosen’s thoughtful book.

Like Silicon From Clay: What Ancient Jewish Wisdom Can Teach Us About AI
by Michael M. Rosen
AEI Press, 328 pp., $32

Tevi Troy is a senior fellow at the Ronald Reagan Institute and a former senior White House aide. He is the author of five books on the presidency, including The Power and the Money: The Epic Clashes Between Titans of Industry and Commanders in Chief.
