GenAI is a new cognitive infrastructure for universities. It redefines the role of the university as a cognitive and infrastructural institution and forces a rethinking of the concepts of "knowledge," "authorship," and "critical thinking." The challenge for universities is to design a mode of coexistence with AI in which technology enhances human cognitive agency rather than replacing it. This means moving from prohibition to transparency and trust, from "AI for writing" to "AI for thinking," and from assessing products to assessing processes, while also investing in data and infrastructure sovereignty and in a culture of reflection and accountability. This path allows universities to preserve their role as institutions of long-term continuity and public trust.

Why this document? To refocus the debate over AI on what matters most in universities: the quality of education, research integrity, and public trust. We demonstrate that well-designed use of GenAI can enhance, rather than undermine, human cognitive agency, provided we treat the technology as a component of infrastructure and organizational culture, not an ad hoc add-on.

The future of higher education in the age of generative AI has not yet been decided. The path we choose depends on our collective reflection, the courage to ask difficult questions, and the willingness to seek new, sometimes less obvious, directions.

GenAI in Higher Education: Recommendations for Public Policy and University Governance

The report, "GenAI in Higher Education: Recommendations for Public Policy and University Governance," organizes the most important threads of this debate and demonstrates how universities can benefit from the change without compromising quality, ethics, or trust. The report aims to systematize the discussion, provide in-depth analysis, and formulate strategic guidelines for public policy and university governance. We invite you to read it and to collaborate on designing a university where GenAI becomes a tool that supports teaching, research, and the organization of work.

This report was created based on the experiences and examples of participants at the "GenAI in Higher Education: New Perspectives for Research and Teaching" conference (University of Warsaw, May 29–30, 2025), organized by DELab UW in partnership with the Ministry of Science and Higher Education. We would like to thank the conference participants, panelists, workshop leaders, and teams who shared materials, data, and lessons learned from their own implementations.

The event created a truly interdisciplinary and intergenerational space for the exchange of ideas, fostering dialogue and collaboration beyond traditional boundaries. The openness and willingness to share experiences meant that, in addition to presenting research and reflection, there was also room for building lasting relationships—both professional and personal.

Conference participants emphasized that generative AI is more than just a prompting technique or an optimization tool. The focus was on the question of the educational, social, and even moral value that technology brings to teaching and research practices. Conference participants demonstrated considerable openness, eagerly challenging the simplistic narratives and media clichés that often dominate discussions about AI.

One of the greatest strengths of the event was the diversity of perspectives – from enthusiasm for new opportunities to measured skepticism about risks and challenges. The common denominator, however, was the need for an informed, attentive, and flexible approach – one that takes academic, social, and ethical realities into account.

Neil Selwyn, a digital education researcher at Monash University, set a reflective and critical tone for the conference, opening with his lecture, "GenAI in Higher Education – Some Things We Need to Talk About." He noted that the debate over AI in higher education is not merely technical: it reveals axiological, epistemological, and social tensions that universities must recognize before moving toward operational solutions. In this spirit, he proposed a framework of four coexisting perspectives—ranging from enthusiastic to critical and pragmatic—as a tool for organizing institutional debates and decisions.

Selwyn introduced a clear ethical criterion: AI integration should stem from the university's mission and values, strengthening cognitive trust and participation, not undermining them. Instead of further prohibitions, he advocated transparency of use, a code of cognitive values, and co-creation of policies with students and faculty. His key insight, that the university's role is to sustain the ability to ask important questions, not merely produce quick answers, became an important reference point for the report's deliberations and conclusions.

DEBATE 1 – Fix or Break? GenAI in Higher Education – N. Selwyn, P. Kahn, Z. Lalak, R. Włoch

The debate highlighted the tension between enthusiasm for GenAI and the protection of academic values, which is why universities should combine innovation with systematic risk-benefit assessments. Instead of rapid implementations, what is needed are clear, public rules for AI use (covering authorship, privacy, and data transparency) and practical support for faculty. Assessment must be redesigned: more emphasis on process, critical thinking, and reasoning, and less on the "finished product." Students should co-create the policies – consultation improves their quality and legitimacy. For research, training in responsible use, standards for documenting work with models, and investment in data infrastructure are crucial. To avoid deepening inequalities, it is worth promoting open solutions, local data, and multilingualism. Organizationally, a cross-disciplinary AI team, pilots with performance metrics, and regular policy reviews will pay off. Clear communication of the policies and building a culture of trust will allow GenAI to "fix" what needs changing rather than "break" what works.

DEBATE 2 – What kind of university do we want in the face of AI development? A. Giza-Poleszczuk, E. Krogulec, A. Szeptycki, J. Uriasz, K. Śledziewska

A key moment of the conference was listening to students, who presented their own expectations for education in the AI era. Their demands became the impetus for a frank discussion with representatives of universities and the Ministry of Science. The debate outlined a vision of universities "with AI and for people": clearly defined values (autonomy, reliability, inclusiveness) should guide the use of AI tools. Redesigning assessment around process, critical thinking, and evidence of reasoning is crucial, while simultaneously supporting faculty in applying these principles in practice. Equal access (accessible, multilingual solutions) was emphasized, along with research standards: transparent data, documented work with models, and investment in infrastructure. Participants stressed student co-creation of the principles and university collaboration with the economic environment to ensure that graduate competencies meet real market needs. While the conclusions were not always comfortable, they clearly showed where universities can improve their strategies in the face of technological transformation.

DEBATE 3 – What Strategy Do Universities Need in the Age of AI? From Vision to Action. Z. Hazubska, E. Jaskulska, A. Baczko-Dombi, K. Śledziewska, M. Słok-Wódkowska

We discussed the specifics of Polish universities and the strategic choices they face in developing and implementing generative artificial intelligence. The debate, "What strategy do universities need in the age of AI? From vision to action," focused on how to translate declarations into concrete management mechanisms – from priorities, through funding, to performance metrics. From the perspective of university authorities and state administration, participants emphasized the need for consistency between university strategies and public policies, as well as clear roles: who is responsible for law and accreditation, and who for implementation and operationalization at the faculty level.

The panel recommended creating a program – a portfolio of AI projects (a short "must-do" list) with an assigned budget, owner, and deadline, so that AI does not become a collection of scattered experiments. Implementations should be based on common services and standards (data governance, security, accessibility), which reduces costs, legal risks, and "reinventing the wheel" within institutions. University "regulatory sandboxes" and rapid pilots with simple KPIs (education quality, staff time savings, costs) were recommended, followed by a decision: scale or close. Attention was paid to compliance with accreditation and education quality: programs and assessments must maintain verifiability of learning outcomes and transparency of AI use by students and staff.

Key investments concern competencies (task-based training for staff and administration), data infrastructure, and tools for documenting work with models, which enhances the replicability of results and security. Regarding collaboration with the wider environment, procurement frameworks and partnerships were recommended to minimize vendor dependence and favor open solutions where possible. Finally, the crucial role of continuous communication of progress and listening to the voices of students and teachers was emphasized – the strategy should be "living," with cyclical review and course correction.

DEBATE 4 – GenAI practices at universities. M. Kołodziejska, A. Mierzecka, J. Chodak, K. Filipek, M. Paliński 

Finally, we asked what can be done "here and now." Participants in the debate recommended short pilots with simple KPIs (time, quality) to select truly useful tools. Central standards for working with AI (versioning, usage logs, "AI-assisted" annotations) and a catalog of recommended tools are needed to limit "shadow AI." In research, documenting interactions with models and a data policy supporting replicability are key; in teaching, clear "usage paths" instead of prohibitions, and tasks that test the flow of reasoning, are essential. Administration benefits from automating letters, summaries, and student services – provided confidentiality is protected and a risk assessment (DPIA) is carried out. Strategically, vendor dependency must be limited (open alternatives, versioning clauses, quality testing after updates). The conclusion: a lightweight governance system plus task-based training and ongoing communication of results, to scale what works.

Generative AI at the University of Warsaw – good practices

The conference grew out of a discussion of the conclusions from the DELab UW study, which can be found in the report "Generative AI at the University of Warsaw – Good Practices." We explored the use of generative AI tools at the University of Warsaw. We examined the practices of students, staff, and administration, collected case studies, analyzed syllabi and grading models, verified data flows and intellectual property issues, reviewed international literature and institutional policies, and assessed technical and ethical risks. Based on this, we developed recommendations that became the starting point for organizing the conference: moving from diagnosis to collaborative solution design and the exchange of best practices.