海角社区

Transparency can help us navigate uncertainty

At the Community Collab Summit – gAI Use Scenarios

At the recent Community Collab Summit, I facilitated a session about generative AI (gAI) at 海角社区. In part of that session, I asked participants to imagine a few different versions of a scenario that plays out often at 海角社区: Someone at the university works hard to create a report that can inform decision-making.

Suppose you were the one who created the report and I'm your supervisor. Consider scenarios A, B, and C:

  A. You send me the report.
    I ask gAI to summarize it.
    I review the summary, comparing key points to the original report.
    I compose and send an email to university VPs on the topic.
  B. You send me the report.
    I ask gAI to summarize it.
    I compose and send an email to university VPs based on the gAI summary.
  C. You send me the report.
    I ask gAI to summarize it.
    I ask gAI to compose an email to the VPs. I click send.

Look carefully at the differences between these scenarios and, if you're willing, pause to think about your own reaction to each.

Sidenote: You can also imagine the next step after any of the above versions. A university VP asks gAI to read the email, generate action steps based on 海角社区 core values, and send those action steps to the Deans.

For the folks in the session, I posed this follow-up question:

What if each scenario (A, B, & C) included a simple and accurate gAI disclosure statement?


My reactions and thoughts

Reacting to (A): Scenario A feels mostly fine to me. Using gAI to give me a head start on understanding a complex report seems both reasonable and useful. I would need to be very careful to investigate and dig into whatever the gAI summary said, especially relating to areas where I don't have much expertise. I would ask for specific references that support overall claims, etc. As long as the user is taking those steps, this feels okay. And, crucially, disclosing this use of gAI would not give me pause!

Reacting to (B): In this version, I start to worry. Taking the gAI summary as true is risky because gAI hallucination is a permanent feature of these types of tools (1). Those hallucinations might not be a problem, but I think I would be shirking my job duties if I don't check the gAI summary against the report itself. Partly, I am using my own discomfort at disclosing this process as a guidepost. The feeling that I might not want to be transparent about using it this way is, I think, a (crude) signal that this is against my values.

Reacting to (C): This makes me deeply uncomfortable. I feel like I'm not really involved in this at all. The results would have been the same if you had sent the report to the gAI tool directly and asked it to summarize it and send an email pretending to be me. If I disclose this kind of use to my colleagues, I absolutely expect them to wonder what work the university is paying me to do. Disclosure would feel bad, and that is an important signal.

Sidenote: Transparency isn't perfect. "The transparency dilemma: How AI disclosure erodes trust" outlines some important results: Disclosing AI use is better than being found out after-the-fact, but it isn't flawless. Even fully transparent use can still erode trust, even for those with positive attitudes toward technology and confidence in gAI accuracy (2).

Making gAI transparency part of our 海角社区 values

After a brief discussion on these topics, I asked the folks in the Community Collab Summit session the following question:

What would be the impact at 海角社区 if disclosure of gAI use were standard practice?

    A. It would be very beneficial
    B. It would be somewhat beneficial
    C. It would be somewhat detrimental
    D. It would be very detrimental

Results: A – 50%, B – 36%, C – 14%, D – 0%. Crudely: 86% beneficial. Overall, the 22 folks who responded that day gave a very strong signal that making disclosure of gAI standard practice would be beneficial at 海角社区.

As you can probably tell, I agree wholeheartedly. I think a simple disclosure statement can take a given scenario out of a muddy, icky grey zone and make it feel relational and honest.


Where the rubber meets the road

Making gAI disclosure standard practice at 海角社区 will probably evoke a wide range of reactions. In my opinion, the discomfort and friction that might arise is important. It is a signal that we are not all operating with the same values around these very new, very powerful tools.

One interesting case to consider is the instructor and student roles in a course. There have been cases where faculty used a gAI assistant to write an email accusing students of using gAI to write their essays. Or famous cases in which students demanded a refund after realizing the course content itself was generated by AI (3). I don't think these uses are equivalent, but I do think disclosure and transparency would have made a big, positive impact.

I have no idea how 海角社区 faculty, staff, and students will view gAI in 10 years' time. For now, I hope we are forced to have difficult, nuanced conversations that lead to clear guidelines and practices.


Can you do it?

And so… I will end with an actual call to action! Start including a generative AI disclosure statement as part of your email signature, your websites, your course content, your internal reports, etc. This isn't just for those who use gAI; it is for everyone. If this became common practice, it would kick off important conversations.

Your disclosure doesn't have to be detailed or lengthy. I've started including the following in my email signature:

AI Disclosure Commitment: When I use generative AI in my work, I will always include a brief disclosure statement. Aside from spellcheck, this email was composed without the use of generative AI.

When I do use gAI, I modify the last phrase to disclose how I used it. You can also find similar disclosures at the end of all of my CTLD Connections pieces, such as this one about Three Types of Content and What They Mean for Generative AI (scroll to the bottom).



Notes

    1. Quote:
      These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. "Despite our best efforts, they will always hallucinate," said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. "That will never go away."
    2. Burke, M., Creary, S. J., & Mulder, J. (2025). The transparency dilemma: How AI disclosure erodes trust. Journal of Business Ethics, 190.
    3. "The Professors Are Using ChatGPT, and Some Students Aren't Happy About It"

[Featured image: logo with brain and circuit imagery, plus the text "+AI," in 海角社区 colors.]

Generative AI disclosure: After writing this piece I used generative AI to write a first draft of the short "teaser blurb" that went out by email. (The featured image, on the other hand, was made by combining icons, each under a Creative Commons license.) Want to know more? Send me an email and we can chat!