AI usage disclosures: my approach and many questions

Something I started thinking about this week is how we should be disclosing our use of AI. As the technology improves, it is getting harder to determine what was made by a human and what was conjured up by a machine. But what is the best way to disclose? And what are the drawbacks?

Just to get this out of the way, I don’t have an answer to any of these questions. But as someone who defaults towards transparency, it seems to me that providing disclosure is necessary at a time when everything can be difficult to believe.

What actually got me started on this was em dashes, of all things. I like using them in my writing. They’re like little speed bumps of information. As it turns out, all the LLMs seem to like them too, which might create false positives where something I’ve actually written gets flagged as AI-generated. I do use Claude Code in my writing, but only to fix grammatical mistakes. I typically write directly in .mdx files using VSCode, and I haven’t been very impressed with spelling and grammar checking extensions.

I use AI in other areas of this site as well. In my photo essays, I will often use LLMs to help generate alt text. I also use agentic coding to build features for this site, to learn and save some time because I have other things in my life I shouldn’t be neglecting, like the laundry that’s been sitting in the dryer for the last three days.

One idea was a little disclosure blurb at the bottom of each post, but that felt heavier-handed than necessary. What I settled on was a page in the footer—on the same footing as a privacy statement or terms & conditions.

Take a look at my AI usage disclosure before reading further.

Since this is my personal site, I chose to write it in my own voice rather than in legalese. The text covers the three main ways I use LLMs: fixing grammar, generating alt text, and some coding. I also included some text describing how I am not using it. There are gray areas that are difficult to capture, such as accepting autosuggestions—particularly ones that match what I would have written anyway—or bad code that takes less time to correct than it would to start completely from scratch.

But there’s a lot that I didn’t include in the disclosure, because I’m not sure it meets the threshold—a threshold that is largely undefined. Sometimes I’ll use an LLM to brainstorm, test an idea, or come up with a name for something. Is that something I should include? What even is the threshold? Would I need to go as far as naming which models I used? Should I give prompting examples? Seems exhausting.

Transparency can have a cost too. When I talked about AI usage disclosure with one of my coworkers, he mentioned a recent study showing that while white males are seen as innovative when they use AI, women and minorities face a competence penalty for using AI to produce identical work. Why am I not shocked? That alone is pretty frustrating, but even more so as every leader in tech is forcing their employees to use AI for gestures wildly everywhere or face the consequences.

AI-generated propaganda poster of the defense secretary, pointing at the viewer, with the text 'I want you to use AI. Go to genai.mil today.' It is placed on a column in the Pentagon with several uniformed service members walking by.
Image source: Reddit user falken_1983

That leaves me with more questions. Is this performative? What really matters more on a site like this—process or output? Am I coming at this from a place of privilege? How might attitudes shift as AI tooling becomes more ubiquitous?

I still believe that disclosures are helpful. As for the competence penalty, we clearly have work to do, and we need to make sure our organizations have appropriate policies and procedures in place. A prior study found that the same penalty applies to working overtime.

While an AI usage disclosure is completely voluntary, it offers the audience helpful context for understanding how content is produced. Unfortunately, there’s no way around the fact that, at least right now, this relies entirely on the honor system.

And on the internet, nobody knows you’re a dog.