How I Slacked at Work and Accidentally Built an Automated, AI-Driven ETL Pipeline
February 11 @ 10:00 am - 10:50 am
Learning new technologies like Artificial Intelligence can feel risky when you are working with critical research data. For me, it started when I was daydreaming at work about my Dungeons & Dragons campaign and realized the best way to learn these tools was to take the “work” out of the equation.
Join me for a case study on how I used a personal D&D database as a “sandbox” to learn high-impact data skills. By working with data I knew intimately (and didn’t mind breaking), I was able to experiment with AI tools, fail safely, and eventually build a working automated pipeline. This isn’t a talk about being a coding genius; it’s about how a librarian harnessed curiosity and personal passion to gain a better understanding of how these often intimidating black-box tools actually work.
This session is designed for:
- Researchers, Scholars, and Practitioners interested in “Ground Truth” testing: using known data to safely audit the strengths, weaknesses, and limits of AI tools before trusting them with your scholarship.
- Librarians who want a practical look at how a researcher might approach the steep learning curve of this landscape. This session offers a viable path for building the technical vocabulary to translate user needs for technical experts and developers, as well as for applying these skills to your own work.
- Technical Experts & Data Scientists who support research. By watching a “research hacker” navigate the messy reality of AI integration, you will see how users who “know enough to be dangerous” actually approach these problems, offering insight into how to better anticipate pitfalls and guide the campus community.
Speaker(s): Beth Tweedy
