Connor Shorten
RAG pipelines, DSPy optimization, and vector search with a focus on code-level implementation.
Nutrition Label
Connor Shorten produces high-fidelity, code-centric tutorials focused on modern AI engineering, particularly RAG pipelines and the DSPy framework. His content is excellent for developers, as he frequently reads library source code and runs live notebooks to explain complex architectures. While highly educational, he is professionally affiliated with Weaviate, meaning his deep expertise is often paired with specific vendor advocacy.
Strengths
- Reads library source code on camera rather than stopping at the API surface.
- Runs live notebooks, so complex architectures are demonstrated with executable code.
- Discloses his Weaviate employment when demonstrating the company's tools.
Notes
- Creator is employed by Weaviate; check descriptions for disclosures when vector search tools are featured.
- Recap videos summarize external content and lack the hands-on code execution found in his tutorials.
Rating Breakdown
Breakdown across the key dimensions we rate.
Recent Videos
- Chunking with Generative Feedback Loops
- Google Gemini 1.5 Pro and Flash - Demo of Long Context LLMs!
- Llama 3 RAG Demo with DSPy Optimization, Ollama, and Weaviate!
- Building RAG with Command R+ from Cohere, DSPy, and Weaviate!
- Structured Outputs with DSPy
- Adding Depth to DSPy Programs
- Getting Started with RAG in DSPy!
- DSPy Explained!
- Approximate Nearest Neighbor Benchmarks - Weaviate Podcast Recap
- Search through Y Combinator startups with Weaviate!
- MosaicML Composer for faster and cheaper Deep Learning!
- Jina AI DocArray - Documentation Overview
- What lead Jina AI CEO Han Xiao to Neural Search?
- Full Stack Neural Search
- Python Tutorial: How to use Weaviate and Jina AI for Image Search!
Why this rating
Evidence receipts showing why each dimension is rated the way it is.
“Let's actually go into the code... this is the BootstrapFewShot class... let's look at the compile method.”[27:55] →
The creator leaves the notebook to read the actual library source code, explaining the internal logic of the optimizer rather than just the API surface.
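The optimizer logic he reads through can be sketched in plain Python. This is not DSPy's actual `BootstrapFewShot.compile` implementation, just an illustrative toy with hypothetical names: run a teacher program over training examples and keep the traces that pass a metric as few-shot demonstrations.

```python
def bootstrap_few_shot(program, trainset, metric, max_demos=4):
    """Toy sketch of bootstrap few-shot 'compilation' (illustrative,
    not DSPy's code): collect input/output pairs where the program's
    own prediction passes the metric, to reuse as prompt demos."""
    demos = []
    for example in trainset:
        prediction = program(example["question"])
        if metric(example, prediction):  # keep only passing traces
            demos.append({"question": example["question"],
                          "answer": prediction})
        if len(demos) >= max_demos:
            break
    # A real optimizer would attach these demos to the program's prompt.
    return demos
```

The key design point the video highlights is exactly this filtering step: demonstrations are not hand-written but bootstrapped from the program's own successful runs.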
“We can have a different language model for every single module in our DSPy program... maybe you want a really strong reasoning model for the generation, but you're happy with a faster model for the query generation.”[18:40] →
Articulates advanced architectural trade-offs (multi-model composition) and explains the specific utility of different models within a pipeline.
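The multi-model composition he describes can be sketched as a plain-Python pipeline in which each stage owns its own model. This is an illustration of the pattern only, with made-up module and model names, not DSPy's API for assigning a language model per module.

```python
class Module:
    """Toy stand-in for a pipeline stage that owns its own model."""
    def __init__(self, name, model):
        self.name, self.model = name, model

    def __call__(self, text):
        # A real module would call self.model here; we just tag the output
        # so the routing is visible.
        return f"[{self.model}] {self.name}({text})"


class RAGPipeline:
    """Two-stage sketch: a fast model rewrites the query, while a
    stronger reasoning model generates the final answer."""
    def __init__(self):
        self.query_rewrite = Module("rewrite", model="fast-model")
        self.generate = Module("generate", model="strong-reasoning-model")

    def __call__(self, question):
        query = self.query_rewrite(question)
        return self.generate(query)
```

Keeping the model choice inside each module is what makes the trade-off he articulates possible: cheap, fast models for mechanical steps and expensive ones only where reasoning quality pays off.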
“Weaviate, the vector search engine that I work at...”[10:55] →
The creator explicitly discloses his employment with the company whose software he is demonstrating, ensuring full transparency.
“This is a recap of the podcast... I'm just going to be going through the blog post that was released alongside the podcast.”[00:08] →
The content is explicitly a secondary commentary on existing materials rather than a demonstration of a first-hand workflow or original benchmark run.