News: Academic Publishing Weekly

Book fair controversies, new APC model, and should we really be concerned about AI?

By Choice Staff

Arab World Publishers and Associations Pull Out of Frankfurt Book Fair

The Frankfurt Book Fair opened this week and, against the backdrop of the war between Israel and Hamas, Arab publishers and publisher associations have withdrawn from the event. According to Publishing Perspectives, exhibitors that have opted out include the Arab Publishers Association, the Emirates Publishers Association, the Sharjah Book Authority, and PublisHer. Publishing Perspectives and The Bookseller both cite statements and decisions by show organizers and awards producers that prompted the exhibitors to withdraw, as well as concern and fear among those unable to travel because of safety risks to themselves, their friends, and their families.

Scholastic Accused of Censorship with Latest “Diverse” Book Fair Collection

In an apparent case of heavy-handed corporate caution, Scholastic is coming under fire for the roll-out of its diverse-stories book fair collection, “Share Every Story, Celebrate Every Voice.” In what it describes as an attempt to spare local elementary schools and school libraries from legislative risk in states and districts that are banning books foregrounding marginalized voices and histories, Scholastic is deferring to librarians on whether they want the collection included in their school’s fair. Critics call the plan censorship at worst and, at best, an easy way for schools and local officials to continue their book bans. Scholastic counters that it and other publishers have been put in “…an almost impossible dilemma: back away from these titles or risk making teachers, librarians, and volunteers vulnerable to being fired, sued, or prosecuted.”

A New Location-Based APC Model for OA and the Cundill History Prize Finalists

In what it’s calling a “publishing industry first,” Elsevier is piloting a new open access pricing model based on geographical location. Called Geographical Pricing for Open Access, the plan will base its article publishing charges (APCs) on gross national income (GNI) per capita. The company says it “aims to reduce financial barriers that have traditionally hindered researchers and institutions from low and middle-income countries from publishing the latest research in Gold Open Access journals.” The new pricing model is scheduled to take effect in January 2024. In the meantime, Elsevier says it will continue to waive APCs for authors in the lowest GNI band.

The Cundill History Prize, which is administered by Montreal’s McGill University and awards a top prize of $75,000 for the best history writing in English, has announced its 2023 finalists. The three titles are Tania Branigan’s Red Memory: Living, Remembering and Forgetting China’s Cultural Revolution, published by Faber & Faber and W.W. Norton; Kate Cooper’s Queens of a Fallen World: The Lost Women of Augustine’s Confessions, published by Basic Books; and James Morton Turner’s Charged: A History of Batteries and Lessons for a Clean Energy Future, published by University of Washington Press. Runners-up each receive a $10,000 prize.

AI and Misinformation? Nothing to See Here

If you think generative AI is bound to trigger a next-level misinformation event—thanks to its ability to produce, when prompted correctly, content that’s almost indistinguishable from human-generated content—you’re not alone. But there are a few researchers who think you should, you know, calm down, man. A report from Harvard Kennedy School’s Misinformation Review, “Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown,” lumps these concerns in with the usual “moral panics” that pop up whenever a new technology goes mainstream. The report’s authors contend that manipulation of media is nothing new and our society hasn’t imploded (not yet, at least). Further, people tend to consume their information from a small set of trusted outlets, so AI-generated misinformation will be “…largely invisible to most of the public.” The authors make other arguments that rely on “people’s preexisting trust,” or lack thereof, in individuals and “the media,” but we’ll defer to the information professional community to determine what influences misinformation and what doesn’t.