John Wise β system:impacted
I spent nearly two decades incarcerated in Florida. That experience is the foundation of everything I build here -- not as a story I tell for credibility, but as a way of seeing how institutional systems actually work when you're the one being managed by them. The data, the records, the algorithms that govern incarceration are tools of control, and they operate most powerfully when they're invisible. This site is where I work to make them visible.
The core of that work right now is public data investigation and transparency infrastructure. I analyze prison mortality and disciplinary data in Florida's carceral system, and I'm building open-access tooling -- including Model Context Protocol (MCP) infrastructure -- to make incarceration and law enforcement data radically accessible to journalists, researchers, and the public. MCP matters here because it turns static datasets into something AI applications, newsrooms, and investigative tools can query directly. The goal is infrastructure that doesn't just publish data but makes it structurally impossible to ignore.
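The shape of that idea can be sketched in miniature. The snippet below is a conceptual illustration only -- it uses a plain Python registry rather than the actual MCP SDK, and the dataset, field names, and tool name are all invented placeholders, not real records -- but it shows the pattern MCP formalizes: a dataset exposed as named, described tools that a client can discover and call instead of scraping a static file.

```python
# Conceptual sketch of exposing a dataset as queryable "tools",
# in the spirit of MCP. Everything here -- records, field names,
# tool names -- is a hypothetical placeholder, not real data or
# the real MCP SDK.

# Stand-in for a published mortality dataset (rows are invented).
RECORDS = [
    {"facility": "Facility A", "year": 2022, "deaths": 3},
    {"facility": "Facility A", "year": 2023, "deaths": 5},
    {"facility": "Facility B", "year": 2023, "deaths": 2},
]

# Tool registry: name -> description + callable. An MCP server does
# something similar, advertising tools that clients can discover.
TOOLS = {}

def tool(name, description):
    """Register a function as a discoverable, callable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("deaths_by_facility",
      "Total recorded deaths for a facility, optionally by year.")
def deaths_by_facility(facility, year=None):
    rows = [r for r in RECORDS if r["facility"] == facility]
    if year is not None:
        rows = [r for r in rows if r["year"] == year]
    return sum(r["deaths"] for r in rows)

def call(name, **kwargs):
    """What a client does: look a tool up by name and invoke it."""
    return TOOLS[name]["fn"](**kwargs)
```

A client that can list `TOOLS` and invoke `call("deaths_by_facility", facility="Facility A")` never needs to know how the data is stored -- which is the property that makes the data hard to ignore.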
Alongside that, I build and maintain self-hosted, privacy-first tools -- automated news monitoring systems, personal data pipelines, tracking and analysis infrastructure -- all running on hardware I control, using open-source software, with no data leaving my machines unless I decide it should. That's not a hobby preference; it's a political position. The same surveillance architectures I study in carceral systems -- data fusion platforms like Peregrine, license plate reader networks, social media surveillance middleware -- are the ones I refuse to participate in with my own tools and data.
I remain deeply skeptical of the venture-capital hype around artificial intelligence, especially the narrative that equates all AI with Large Language Models. But I use LLMs deliberately and critically -- including running local inference on my own hardware -- because the technology is genuinely powerful when you control it, when you understand what it's doing, and when it's designed with transparency in mind. The ideas documented on this site focus on ethical AI, semantic structures, and meaningful interactions through open standards like RDF, SPARQL, and linked data -- systems that prioritize human dignity and public accountability over extraction, surveillance, and profit.
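For readers unfamiliar with those standards, the core idea is small: facts stored as subject-predicate-object triples, queried by pattern. The toy below is a pure-Python stand-in, not a real RDF library or SPARQL engine, and the namespace and values are invented for illustration -- but the pattern-matching step is the same move a SPARQL `SELECT` makes over a real triple store.

```python
# Toy triple store illustrating the RDF idea: facts as
# (subject, predicate, object) triples, matched by pattern.
# The namespace and all values are invented placeholders.

EX = "http://example.org/carceral#"  # hypothetical namespace

triples = {
    (EX + "facilityA", EX + "type", EX + "Facility"),
    (EX + "facilityA", EX + "recordedDeaths", 5),
    (EX + "facilityB", EX + "type", EX + "Facility"),
    (EX + "facilityB", EX + "recordedDeaths", 2),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None is a wildcard,
    playing the role of a SPARQL variable like ?facility."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Rough analogue of:
#   SELECT ?f WHERE { ?f ex:recordedDeaths ?d . FILTER(?d > 3) }
high = [s for (s, p, o) in match(p=EX + "recordedDeaths") if o > 3]
```

The point of the open standards is that the same query works against anyone's triples, published by anyone, with no proprietary gatekeeper in between.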
Everything here is deliberately unfinished, openly evolving, and explicitly experimental. I'm a graduate student in Applied Data Science at Syracuse University's iSchool, concentrating in AI, and this space grows alongside my studies -- statistical methods, database engineering, machine learning, natural language processing, all of it feeding back into the transparency and accountability work. This website is an invitation to think with me about how technology, data, and AI might better serve justice and genuine public good.
If you're just curious about me as a person, you're welcome here too. I share pieces of my life, reflections on reentry, pictures of my cats, and what it's like to rebuild a sense of home and self after so many years away.