DCU Position Statement on the use of Artificial Intelligence tools

Context

Developments in Artificial Intelligence (AI) present both powerful opportunities and daunting challenges to education and to society more broadly. We have experienced only the very early stages of its impact, and tools are developing at a very rapid pace. However, the EU Artificial Intelligence Act, which came into law in August 2024, provides a legal framework for the deployment of AI within Europe, and there is a growing body of international guidance on good practice to draw on.

The technologies of AI and the study of its impact are research strengths in DCU. AI has also been the subject of extensive consideration internally, at both institutional and local level, touching on all of our key activities. We have developed insight into our collective understanding, sentiment and use of AI through a staff survey, introduced guidance for critical aspects of teaching and research, reviewed our curriculum, and invested in pilot activities that we can monitor and evaluate as we build up our knowledge of AI.

It is timely that we adopt a clear statement of our position on the use of Artificial Intelligence tools in the university. This will be a living document and will evolve as our experience, knowledge and insight develop through a period of very rapid technological change.

 


As a university, 

  1. We recognise that AI presents very significant opportunities and can have a very positive impact on our research, our teaching and the efficiency and efficacy of our operations at the individual, team and organisational level.
  2. We have a leadership role to play in AI research (technological as well as societal, economic and environmental dimensions) and in informing policy and the public on the implications of AI.
  3. We have a duty to protect the integrity of our research and the academic integrity of our awards, and to maintain curricula that prepare students for an unscripted future which includes AI.
  4. We recognise the need to provide clear, up-to-date guidance on the appropriate use of AI, and to develop institutional capability in terms of knowledge and skills relating to AI.

 

Principles underpinning Artificial Intelligence in DCU

  • We contribute to developments in AI, in the ethical, equitable, sustainable and inclusive adoption of AI in society and in informing and educating the public about AI.
  • Our use of AI in the university is in line with our values and the EU AI Act, is human-centred and serves our students and our strategy. 
  • We foster trust and confidence in AI technology among DCU students and staff by providing supports, appropriately governing the adoption of AI in the university, managing risks, acting legally, ethically and transparently, and actively protecting equity, inclusion and the environment.

Risks arising from generative AI in particular include threats to academic integrity, and the risk that personal, commercially sensitive or strategically important information, or confidential, incomplete or draft data, is inappropriately (and probably inadvertently) shared. Ethical considerations include the protection of personal data, climate impact, intellectual property and the avoidance of bias.

 

Key responsibilities underpinning staff use of Artificial Intelligence tools at DCU

  1. Become, and stay, well-informed
  2. Act legally, ethically and transparently at all times
  3. Protect confidential, copyrighted, institutional and personal data
  4. Protect your own and students’ work

The practical implications of each of these responsibilities are elaborated below.

 

As our position evolves, DCU will:

  • Develop a Policy on AI use and set up mechanisms to ensure that we appropriately and coherently govern adoption of AI tools in line with a defined risk tolerance.
  • Continue to provide and develop training and guidance applicable to aspects of teaching, research and business operations through units including the Teaching Enhancement Unit (TEU), Graduate Studies, the Research Office, and DCU People Learning & Organisational Development.
  • Identify which tools are being used and which are available, and build a whitelist indicating what kind of data is appropriate in each case. Regular review of emerging tools, as well as re-assessment of existing whitelisted tools, will be necessary.
  • Develop a plan for investment in AI tools and pilot initiatives, coherent with an overarching digital strategy.

 


Translating into Practice

Key responsibilities underpinning staff use of Artificial Intelligence tools at DCU 

 

  1. Become, and stay, well-informed

Actively engage with opportunities to develop or deepen your understanding of AI in your academic or professional area. All staff should take the DCU Introduction to AI Literacy course, designed by our internal team of academic experts in the field of AI, which supports our compliance with the new EU Artificial Intelligence Act. LinkedIn Learning and sector or disciplinary reports provide other rich sources of information.

Remain up to date on relevant guidance as it emerges and evolves nationally, internationally1 and locally. Pay particular attention to guidance from research funders and publishers. DCU guidelines, such as those issued by the Research Ethics Committee or the TEU, will be reviewed frequently and updated as needed to reflect changes in technology, regulations and institutional priorities.

1 Examples include: Generative Artificial Intelligence: Guidelines for Educators (QQI, 2023) and the ERA Living Guidelines on the Responsible Use of AI in Research.

 

  2. Act legally, ethically and transparently at all times in the use of AI

Keep in mind that you are fully responsible for your use or deployment of AI, responsible for acting with professionalism, and accountable for any harm or misuse (even if unintended). Be transparent and document your use of AI. The use of generative AI for any University-related research, scholarship or work should be clearly disclosed to stakeholders and impacted parties or individuals.

Consider that generative AI-produced content may be biased, fabricated or inaccurate, or may contain copyright-protected or proprietary information, and will require human review before use. It is not acceptable to use AI tools to create deepfakes, generate misleading content, or promote discrimination, bias or harm.

 

  3. Protect confidential, copyrighted, institutional and personal data

Be alert to the particular risk with AI of inadvertently sharing data inappropriately or illegally. Your use of AI must be in compliance with existing University policies, which include:

  • Data Privacy policy (GDPR)
  • Digital Systems and Cloud Services policy
  • Intellectual Property policy

Ensure that you do not share any content that compromises an individual’s privacy, confidential university processes, or DCU’s commercial or strategic interests. If you do not know whether it is safe to share DCU data with a specific AI tool, DO NOT share it.

DCU data includes content from email accounts and drives, recordings or transcripts of DCU meetings, draft papers, books, book chapters, theses, or any document not officially published via our web pages or on DORAS. The use of personal data as an input to a generative AI tool is likely to constitute a data breach under GDPR, unless explicit informed consent has been obtained.

Data can be shared in a range of ways, including:

  • Information entered via prompts
  • All uploaded material/data
  • Personal identifying information such as names or email addresses
  • Other background information you provide to improve personalisation

AI product providers vary in their approaches to data protection, retention and usage of your data. Entering information into AI tools is similar to posting that information on a public website. You must assume that any information you input to, or give these tools access to, is no longer private and may be used for training the model, sold on to third parties, accessed by governments or made publicly available. Open-source European tools are a safer approach than non-EU-owned tools, which are very likely not GDPR compliant.

AI tools do not offer the ‘right to be forgotten’: once data is uploaded or entered, there is no way to delete it, which can lead to loss of data ownership.

It is intended to build up a whitelist of tools that have been scrutinised for data privacy and licensing agreement conditions, in order to approve them for potential use with specific categories of DCU data. The approach will include a mechanism for colleagues to propose AI tools for review, and regular revisiting of approvals when software is upgraded or licensing changes. As with all IT investment, central licences and quality controls will be important. Note that whitelisting alone does not authorise the deployment of an AI tool for a particular purpose, given that many applications of AI will involve important additional considerations.

Under the University’s existing enterprise licences, the MS Copilot2, Google Gemini AI and Zoom AI Companion suites are available to all staff. Log in with your DCU identity. These systems are configured to be GDPR compliant: all data entered remains within the existing DCU data privacy agreements, and no data is shared with these systems’ training models. No DCU data should be entered into any other AI tools.

2 This is distinct from Microsoft 365 Copilot, for which we have not entered into a licensing agreement.

 

  4. Protect your own and students’ work

Protect your own intellectual property. Exposing information to an AI tool could result in loss of ownership.

Do not use AI detection software to review students’ work for plagiarism. Such tools are currently not particularly effective but, more importantly, once work is submitted to an AI tool it can be harvested by the AI platform without the student’s permission.

 

Published: 07/03/2025
