Workshop Proposal

Interrogating GenAI Augmentation for CHIworkers: Strategies for Professional Autonomy and Accountability

Sandhaus, Imteyaz, Almutairi, Prajod, Ramesh, Savage, Yang & Muller — CHIWork 2026, Linz, Austria

Abstract

As Generative AI (GenAI) becomes deeply embedded in UX research, design, and software engineering, the HCI community faces a pressing challenge: balancing the acceleration of output with the risks of de-skilling, loss of flow state, and diminished accountability. This workshop moves beyond simple AI disclosure statements to examine how HCI professionals can maintain “deep work” and intellectual autonomy rather than merely “scraping by” on AI outputs, and when full hand-off may be appropriate. Participants will be invited to bring concrete examples of AI-augmented workflows they consider responsible, or irresponsible, drawn from active practice (personal experience or user research). Through mapping real-world AI use and collaboratively brainstorming countermeasures, this hybrid workshop will bring together researchers and practitioners to define what ethical “co-thinking, co-creating, and co-augmenting with AI” looks like. Ultimately, we aim to produce a shared repository of evolving responsible practices or a professional code of conduct, helping the community transition toward accountable, professional GenAI-augmented workflows.

Motivation & Related Work

As GenAI becomes embedded across workplaces, it is reshaping the work practices of UX researchers, interaction designers, software engineers, data analysts, and content strategists alike, including professionals who rely on assistive technologies in their own workflows. In response, the community has begun developing guidelines for responsible use, most prominently through disclosure requirements that make AI involvement transparent. Disclosure is a necessary first step, but it does not on its own address whether the practitioner deeply understood the output, whether professional skills are gradually eroding, or who bears responsibility when AI-augmented work fails. Moving beyond transparency toward a more substantive framework for professional accountability is the central challenge this workshop takes up.

This challenge has roots in ongoing conversations within the CHIWork and broader HCI communities. Recent work has highlighted the tension between automation-driven cost reduction and Human-Centered AI (HCAI) that augments human capabilities. Furthermore, the broader HCI community is actively exploring the transformative impact of GenAI on research cycles, from prototyping to data analysis, while grappling with the corresponding ethical considerations of generated content and reproducibility. Building on these foundational discussions, this workshop addresses four critical themes:

1. Maintaining “deep work” and intellectual autonomy in GenAI-augmented workflows.
2. Guarding against de-skilling and the gradual erosion of professional expertise.
3. Clarifying ownership and accountability when AI-augmented work fails.
4. Moving beyond disclosure toward shared professional norms and a code of conduct.

Workshop Format

This is a half-day, hybrid workshop held at CHIWork 2026. We will accommodate both in-person participants in Linz and a dedicated remote cohort. We will use Zoom for synchronized sessions (Welcome, Presentations, and Reflection) and separate breakout groups (physical and virtual boards) for working group discussions. The organizers will provide dedicated hardware to connect the physical room to remote attendees, ensuring a seamless hybrid experience.

Workshop Activities

Our planned activities are structured into three phases across a half-day schedule.

Before the Workshop

To ensure a high-quality cohort, we will actively recruit participants through HCI mailing lists and direct outreach to experts in GenAI and the future of work. Track 1 Position Papers will be peer-reviewed by the organizing committee for relevance and contribution to the workshop themes. Track 2 participants will be selected based on the depth and reflexivity of their 2-page AI disclosure. We will prioritize a diverse mix of career stages and a balance of researchers and practitioners. Accepted attendees will be invited to a dedicated Slack channel for asynchronous introductions.

This workshop is designed as a safe, inclusive, and reflexive space where participants can be vulnerable. To facilitate this, participants will be required to commit to a Workshop Code of Ethics that explicitly protects privacy and confidentiality during both workshop discussions and in the publication of materials. We actively encourage the open sharing of negative experiences, instances of “ethical friction,” AI fatigue, and moments where AI use felt inappropriate, changed the research direction unexpectedly, or compromised collaboration.

Main Workshop Schedule

Time | Activity
0:00 – 0:15 | Welcome & Introductions: Setting the stage for a safe, reflexive space.
0:15 – 1:15 | Presentations & Public Disclosures: Track 1 authors present 5-minute lightning talks. Track 2 attendees publicly read and share their disclosure statements. We will use these shared experiences to dynamically form our thematic working groups.
1:15 – 1:30 | Coffee Break & Networking
1:30 – 2:30 | Working Groups (Mapping Realities): Participants separate into thematic groups (e.g., “Qualitative Research,” “Collaboration”). Groups use virtual and physical sticky notes to summarize concrete examples of both responsible and irresponsible AI use within their HCI contexts.
2:30 – 3:15 | Reframing Strategies & Mediation: Groups move from clustering examples to identifying conditions and strategies for accountable AI use. Workshop facilitators act as mediators within each group.
3:15 – 3:45 | Collaborative Code of Conduct Write-Up: Participants collaboratively draft responsible AI use policies via shared templates. Assigned organizers serve as dedicated note-takers.
3:45 – 4:00 | Reflection & Synthesis: Reflecting on future responsible AI use and outlining the shared code of conduct by theme or workflow phase.

Organizers

Our organizing committee brings together emerging and established scholars with expertise in responsible AI use. Junior researchers take on specific chair responsibilities; senior researchers assist with outreach and facilitate group sessions. See the Organizers page for full biographies and contact information.

Draft Call for Participation

We welcome you to CHIWork 2026! As Generative AI (GenAI) embeds into HCI workflows, balancing efficiency with accountability is critical. The “vibe coding” era and AI-assisted analysis have widened the “accountability gap.” This workshop seeks to professionalize GenAI-augmented work by defining ethical “co-thinking” and shifting beyond simple disclosure toward intellectual autonomy.

We invite researchers and practitioners to join this half-day hybrid session via two submission tracks:

Track 1: Position papers addressing the workshop themes, peer-reviewed by the organizing committee.
Track 2: A 2-page reflexive AI disclosure statement describing your own GenAI-augmented workflow.

Full details, templates, and submission instructions are on the Call for Participation page.

Post-Workshop Plans & Dissemination

Accepted participants will join a community of practice aimed at producing a shared repository of responsible HCI behaviors. Accepted position papers will be published open-access on arXiv. The primary synthesized output of the workshop will be a living “Professional Code of Conduct for the GenAI-Augmented HCI Worker,” shared via this website and/or OSF. Following the workshop, we intend to invite interested attendees to co-author a joint publication synthesizing our collective strategies for accountable GenAI use. We also intend to disseminate our findings through a summary article in an HCI magazine or journal, highlighting the evolving nature of ownership and accountability in the GenAI era.

Appendix: AI Use Statement Framework

While participants are free to structure their disclosures as they see fit, we provide the following thematic framework to help guide reflexive accounts of GenAI integration. We encourage participants to move beyond simple disclosure toward a deep reflection on agency, accountability, and “co-thinking.”

HCI Research & Design Cycle

Characterize your GenAI usage across the phases of your HCI research and design cycle, from early ideation and prototyping through data collection, analysis, and write-up.

Thematic Areas for Reflection

Agency and Originality
How do you maintain status as the primary “arbiter” of the final design? Have you noticed shifts in your professional identity?
Co-Thinking in Development
Which tasks have you delegated to AI, and which have you reclaimed for “deep work”?
Empathy and Quality Integrity
Describe a specific instance where AI-driven acceleration felt at odds with qualitative depth. How did you verify rigor?
Resistance and Re-imagining
Where have you intentionally rejected AI use for ethical or professional reasons?
Professional Policies & Boundaries
How do institutional standards shape your AI use? When do you choose to stop using AI?

Note on AI use in this proposal: The authors assert full authorship over this proposal. Planning steps and thematic decisions were made entirely without AI assistance. Gemini and Grammarly were used as writing assistants to refine language and structure, with complete oversight retained by the authors.