Artificial intelligence (AI) is a broad term that encompasses a large, maturing set of tools that higher education is slowly learning to embrace. Large language models (LLMs), what most people think of when they hear about AI, excel at the interface of humans and technology. As our tools have become increasingly digital, the human/tech interface is often the most frustrating part of any process. As such, AI is quickly growing as an e-learning tool to remove barriers, advance student learning outcomes, and bolster student support.
From early composition courses to high-level research courses, each of us can recall instances of struggling to find the sources we desperately needed to cogently argue a point or to bridge two parts of a theorem (I’m a mathematician). As research tools evolved (e.g., library cataloging systems, digital card catalogs, Google Scholar, etc.), these innovations helped us improve the efficiency of our searches—they did not do the work for us. In this interview, colleagues from Maryville University (St. Louis, MO) share an innovative use of AI/LLM and how their tool serves as the next iteration in that cycle of innovation.
Amber Spratlin is the Library Research and Instruction Specialist for Online Programs at Maryville University. Her work involves reference and instruction assistance for online students, as well as library systems administration and interlibrary loan operations. Sara Bronson is the Digital Experience Developer at Maryville University. Her work centers around improving the student experience through artificial intelligence and other integrated technologies, both inside and outside the classroom environment. Spratlin and Bronson partnered across disciplines to create Maryville University’s Library AI Search Companion, each critically contributing to the project as subject matter experts.
Amber Spratlin (AS): The Maryville University Library constantly uses data to improve our services, resources, and design. The path to an AI Search Assistant actually began with the implementation of another product called Lean Library. The data show that our students start their research process in Google rather than the library’s website. Rather than fight that preference, we adopted Lean Library in 2023. Lean is a browser extension that connects the open web to the library’s holdings and interlibrary loan services, enabling users to take advantage of the library’s collection while using the search tools that make sense to them.
Positive user feedback for Lean led me to ask the next question: What makes Google so much more usable than the library’s discovery layer? There are many answers, but it boils down to natural language processing. Google does it well, and library systems still don’t. Library vendors have made huge strides in recent years, but a third of the reference questions our library receives revolve around how to use library interfaces. Meanwhile, most people can successfully use Google.
This is where the idea for an AI Search Companion started to take shape. I thought, “What if we removed the barrier with an AI companion meant to turn natural language queries into optimized library search strings?”
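To make the idea concrete, here is a minimal, purely illustrative sketch of what "turning a natural language query into an optimized library search string" can mean. This is not Maryville's actual implementation: the real Search Companion delegates this step to an LLM prompt, while the rule-based function below (stopword removal plus AND-joined quoted terms) only approximates the output shape.

```python
# Illustrative sketch, NOT Maryville's tool: reduce a natural-language
# question to a Boolean search string by dropping function words and
# joining the remaining terms with AND. A real AI companion would use
# an LLM prompt here instead of fixed rules.
import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "for", "to", "and",
             "or", "is", "are", "what", "how", "does", "do", "about"}

def to_search_string(question: str) -> str:
    """Turn a natural-language question into a quoted, AND-joined query."""
    words = re.findall(r"[A-Za-z']+", question.lower())
    terms = [w for w in words if w not in STOPWORDS]
    return " AND ".join(f'"{t}"' for t in terms)

print(to_search_string("How does caffeine affect sleep in adolescents?"))
# "caffeine" AND "affect" AND "sleep" AND "adolescents"
```

The gap between a question like the one above and its Boolean form is exactly the barrier the Search Companion is meant to remove.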
Shortly after having the idea, I learned that Maryville created an AI Committee, which issued a challenge to the campus community. They wanted original ideas to improve our student experience with AI. I pitched the Search Companion, and they were excited by the idea because of its broad reach across our student community.
Sara Bronson (SB): We are! The Office of Digital Learner Experience (DLX) has been diligently working on integrating AI into the classroom. I am most involved with our Social Learning Companion (SLC). Social learning companions are AI-powered chat tools that are typically embedded within courses and available to students 24/7. To build these SLCs, I work with subject matter experts (typically faculty) to gather Maryville-authored course materials that serve as the bot’s knowledge base. Most SLCs can answer questions about course content and generate study materials like quizzes. The SLCs reach multiple disciplines across Maryville, including math, composition, graduate nursing, nutrition, psychology, computer science, and the library, with more coming down the pipeline every week. Every SLC is unique and specific to what its department and students need. For our students, embracing AI and providing tools like the SLCs demonstrates our student-centered approach.
SB: We worked together to collect the documentation we thought the AI would need to learn from. However, we quickly found out we needed more. Amber supplied additional documentation, and we were able to change the prompting to improve the user experience. We repeated this process multiple times until we got it just right. When building the prompt for the AI, there were also some behaviors we had to tell the bot to omit.
AS: Exactly. For example, originally, I thought that the bot would make stronger search strings if it used subject searching rather than just keyword searching. Unfortunately, that didn’t work quite as well as we’d hoped, so I asked Sara to re-prompt the bot to eliminate that behavior.
We also noticed that the bot was having trouble with publication types. I created additional documentation that Sara integrated to help the bot learn how to choose the right field code for the request. When I originally made the publication type documentation, I used document formatting (tables) and written explanations that I would use to teach a person the information. The results were inconsistent, so I learned to “think like a bot” and present the same information differently. I've appreciated Sara’s flexibility as I created multiple iterations of the same information.
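The "think like a bot" lesson above can be sketched in miniature: instead of a formatted table with prose explanations, present the publication-type-to-field-code mapping as flat key/value pairs the model can apply mechanically. The field codes below are EBSCO-style examples (TI for title, SO for journal/source, PT for publication type); the codes an actual discovery layer accepts vary by database, so treat this as a hypothetical illustration rather than Maryville's documentation.

```python
# Hypothetical illustration of restructured training documentation:
# a flat mapping from request type to an EBSCO-style field code,
# rather than a human-oriented table. Codes vary by database.
FIELD_CODES = {
    "article title": "TI",
    "journal name": "SO",
    "publication type": "PT",
}

def apply_field_code(kind: str, value: str) -> str:
    """Prefix a search term with the field code for its request type."""
    code = FIELD_CODES.get(kind.lower())
    return f'{code} "{value}"' if code else f'"{value}"'

print(apply_field_code("journal name", "Journal of Nutrition"))
# SO "Journal of Nutrition"
```

The same information a person would learn from a table and a paragraph is here reduced to pairs a model can match against reliably.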
AS: We have approached this as an iterative process, and any multi-step problem presents its own challenges. The first step involved ensuring that the chatbot can effectively generate search strings that yield relevant results. The art of creating just the right search string is partially dependent on syntax (which machines do well) and partially dependent on creativity (which humans do well). This foundational capability is crucial for the chatbot's overall success in retrieving useful information. As we mentioned before, test results were initially variable but always promising. After working together to re-engineer our prompts and the supporting documentation used by the bot, the search strings it produces are overwhelmingly successful and specific.
SB: Initially, the bot generated the search string, and users would copy and paste what it provided into the library's discovery search. During phase two of development, we created new prompts to teach the bot to translate the search strings it generates into result URLs in the library's EBSCO Discovery Service platform. This streamlined the information retrieval process further and improved the user experience. As we begin phase three of development, we will continue to use the iterative approach that has brought us success in steps one and two.
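The phase-two step described above amounts to URL-encoding the generated search string into a results link. The sketch below shows the general shape; the base URL and query parameter are placeholders, since a real EBSCO Discovery Service profile has its own institution-specific URL scheme that this example does not reproduce.

```python
# Minimal sketch of phase two: embed a generated Boolean search string
# into a discovery-layer results URL so users can click straight through
# instead of copying and pasting. The base URL and the "q" parameter
# are placeholders, not EBSCO's actual scheme.
from urllib.parse import urlencode

BASE_URL = "https://discovery.example.edu/search"  # placeholder institution URL

def to_result_url(search_string: str) -> str:
    """URL-encode a search string into a clickable results link."""
    return f"{BASE_URL}?{urlencode({'q': search_string})}"

print(to_result_url('"caffeine" AND "sleep"'))
# https://discovery.example.edu/search?q=%22caffeine%22+AND+%22sleep%22
```

Handing the user a finished link, rather than a string to paste, is what streamlined the retrieval process in phase two.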
AS: As we mentioned, we’ve been through several iterations of the documentation the bot uses to provide answers. You would be surprised how much a small thing like punctuation matters in a search string. Once we had all the i’s dotted and t’s crossed in the training documentation folder, we moved forward with giving the tool to students. And of course, if the bot isn’t able to parse a question, it’s prompted to provide links to the library’s 24/7 live chat service and our reference contact information.
SB: We conducted thorough rounds of testing. This involved collaboration within Maryville’s division of Digital Learner Experience (our information technology, development, and technology innovation division), engaging our library staff, and ultimately including students in hands-on sessions. After gathering data from the hands-on sessions, we synthesized what we learned and created documentation for students that highlights best practices for interacting with the bot, which we provide to each new group of testers. We launched pilot groups with computer science, nursing, and education students in Spring 2025 and gathered additional data and feedback for future improvements.
SB: Our networking and security team has strict guidelines and requirements for all of Maryville’s systems. We partner with Capacity (an AI-powered support automation platform) for several chatbot tools, and Capacity likewise maintains a high level of security, ensuring our data does not leave our system. Maryville’s SLCs have access only to data in a folder we provide to them. And because Capacity works with our SSO, users are able to authenticate safely as well.
When I learned of this application of AI at Maryville, I was impressed by Bronson and Spratlin’s thoughtful, student-centered implementation for such a practical purpose. Their discovery-centered tool is, at heart, an accessibility tool. This use of AI removes a disadvantage experienced by most researchers, especially undergraduates: the need to master traditional Boolean search functionality in library discovery layers. Instead, the tool provides a mechanism to convert natural language conversation into search strings with a high level of sophistication.
Jon Ernstberger is the Chief Academic Officer at Point University in West Point, GA. He earned his doctorate in Applied Mathematics with a Computational Concentration from NC State University in 2008 and has research interests focused on elearning, pedagogy, student success, and computational applications of mathematics.
© Copyright 2025 held by Owner/Author. 1535-394X/2025/07-3757326 $15.00 https://doi.org/10.1145/3748495.3757326