The AI Revolution Meets Higher Ed at the ASU+GSV AIR Show

Artificial intelligence mingles with education at this inaugural event.

The AI market’s hottest new conference

The ASU+GSV AIR (“AI Revolution in EDU”) Show concluded last week at the San Diego Convention Center. The inaugural, high-energy event spotlighted a burgeoning market of mostly fresh-faced AI tool providers jockeying for position in what the AIR Show calls the “Pre-K to Gray” market.

The AIR Show convened ahead of its co-located Summit, which I did not attend because, well, the AIR Show is free and the Summit costs $2,450 (the 2025 early-bird rate). While this was the first year for the Show, the Summit dates back to 2010 and features a much higher-profile speaker roster, earning the title “the Davos of Education” from Forbes.

So while Choice’s travel budget currently excludes such Davos-adjacent events, I did get a chance to meet some of the AI vendors serving higher ed and to hear what administrators and others have to say about the market.

Running April 13–15, the AIR Show was co-hosted by Arizona State University—notable as the first higher education institution to partner with OpenAI and implement its ChatGPT Enterprise edition—and GSV, a venture capital firm that invests in education technology. The event’s footprint consisted of main and secondary stages set around a smallish exhibit hall (the event advertised about 150 exhibitors and sponsors), with breakout sessions held upstairs on the mezzanine level. Attendees seemed to come primarily from the K–12 sector—a conclusion I came to when Bill Nye the Science Guy walked out to a standing ovation for his afternoon keynote. But really, who isn’t a fan?


The AI vendor report

My travel companion was none other than Gary Price, and together we made quick work of the exhibit hall, limiting ourselves to the vendors that targeted the higher ed sector. The big players were there—Google, Turnitin, Microsoft, Intel—but we focused on the smaller service and tool providers built on top of LLMs. These ranged from existing providers that have since added AI functionality to their platforms (PowerNotes, QuickTakes) to small startups that provide specific AI-based assistance to faculty and students (Prof Jim, Stellia, HiTA AI). At least one company, GPTZero, aims to take on Turnitin as an AI detector that also helps students use AI responsibly. Gary and I heard the phrase “academic integrity” a lot.

It’s worth noting that many of these tools are still very much in development, and despite what you might hear in the mainstream media, it’s clearly early days for AI integration in higher education. Yes, students are already using AI to help with assignments, but faculty, administrators, and librarians are still developing appropriate use cases and literacy frameworks while slowly and deliberately exploring how AI can be incorporated across the institution.

Vendors we spoke to, especially those that have developed tools for faculty and students, seemed to be looking for an audience (read: buyer) and were a bit unsure how to approach the institution. This is a big opportunity for librarians. It’s safe to say that AI’s impact on scholarly communication is going to be next-level, and, frankly, the library should be at the center of any implementation. The developers we spoke to, for the most part, had not considered that.

However, it’s important to note, especially for libraries, that these tools will need constant marketing so faculty and students even know they’re available. They also come with a fairly steep learning curve, as Gary pointed out to me. That means time and effort for educators and students. Will they commit, or simply rely on the easiest and cheapest solutions?

AI’s big problem: data privacy

And let’s not forget about data. AI can’t evolve without it, and at least one speaker panel discussed learning tools and enterprise-wide AI applications (think personal tutors, admissions support, social support, student success) that will require an alarming amount of tracking and personal data to implement properly.

For example, MIT, Georgia State University, and Quinsigamond Community College are partnering on a project sponsored by the Axim Collaborative to develop a GenAI tutor, in this case for an introductory class on Python programming. For an AI tutor to work effectively, the project principals have determined, it needs to be personalized to a student’s background, learning abilities, and so on. In other words, the technology can’t be a one-size-fits-all solution dropped onto the student; it needs inputs about the student so it can tailor its support appropriately. That personalization is required for inclusive and equitable learning and outcomes.
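What might those inputs look like? Here’s a minimal, hypothetical sketch in Python; the profile fields, prompt wording, and function names are all invented for illustration and don’t reflect the Axim project’s actual design:

```python
from dataclasses import dataclass

# Hypothetical student context an AI tutor might draw on.
# These fields are illustrative, not the Axim project's schema.
@dataclass
class StudentProfile:
    prior_courses: list[str]   # e.g., ["Intro to Statistics"]
    preferred_pace: str        # e.g., "step-by-step"
    accessibility_notes: str   # e.g., "screen-reader user"

def build_tutor_prompt(profile: StudentProfile, question: str) -> str:
    """Fold student context into an LLM system prompt."""
    background = ", ".join(profile.prior_courses) or "none"
    return (
        "You are a patient Python tutor.\n"
        f"Student background: {background}.\n"
        f"Preferred explanation pace: {profile.preferred_pace}.\n"
        f"Accessibility notes: {profile.accessibility_notes}.\n"
        f"Student question: {question}"
    )

profile = StudentProfile(["Intro to Statistics"], "step-by-step", "none")
print(build_tutor_prompt(profile, "Why does my for loop never run?"))
```

Even this toy version makes the trade-off visible: every line of personalization is another piece of student data the system has to collect and store.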

The same can be said for enterprise-wide applications. Take broader student success initiatives, for example. AI can be used to preemptively identify at-risk students, but only if it’s fed a wide range of data—the more disparate, the better. In both of these examples, the motives and benefits are entirely noble and necessary, and today’s digital natives may welcome the personalization. But I couldn’t help but wince at the amount of data needed to make these solutions work.
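To see why “disparate” is the operative word, consider a toy risk-scoring sketch; every field name and weight below is invented, and no vendor’s actual schema or model is implied:

```python
from dataclasses import dataclass

# Hypothetical signals for an at-risk-student model. Each field would
# be pulled from a DIFFERENT campus system, which is exactly where the
# privacy concern comes in.
@dataclass
class StudentSignals:
    lms_logins_per_week: float  # learning management system
    assignments_late: int       # courseware
    gpa_trend: float            # registrar (negative = declining)
    aid_status_flag: bool       # financial aid office
    advising_visits: int        # advising office

def naive_risk_score(s: StudentSignals) -> float:
    """Toy weighted heuristic; real systems train models on far more data."""
    score = 0.0
    score += 0.20 * max(0.0, 3.0 - s.lms_logins_per_week)  # disengagement
    score += 0.15 * s.assignments_late                     # missed work
    score += 0.30 * max(0.0, -s.gpa_trend)                 # declining grades
    score += 0.20 if s.aid_status_flag else 0.0            # financial stress
    score += 0.10 * max(0, 2 - s.advising_visits)          # low support contact
    return min(score, 1.0)  # cap at 1.0 for readability

print(naive_risk_score(StudentSignals(1.0, 2, -0.3, True, 0)))  # -> 1.0
```

Five inputs, five separate campus data sources, and that’s the bare minimum.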

For me, this event also signaled a divergence between the commercial and academic sectors. Vendors are churning out solutions at a rapid clip, but librarians, faculty, and administrators are slow-walking AI implementations in typical academic fashion, which, for now, is probably for the best.


🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

✍️ Interested in contributing to LTI? Send an email to Daniel P. at Choice with your topic idea.