Top 5 Challenges Facing Use of AI in Universities in 2025

Niall MacGiolla Bhuí
Feb 24, 2025

The Rise of Artificial Intelligence in Academia

The integration of Artificial Intelligence (AI) in academia is rapidly transforming the educational landscape. Artificial intelligence is no longer a distant promise; it is a dynamic, transformative force. By enhancing personalised learning, streamlining research, and automating administrative tasks, AI offers tremendous opportunities to improve educational outcomes. At the same time, the ethical challenges it poses—particularly around academic integrity and fairness—require thoughtful policies and ongoing dialogue among all stakeholders. As academia navigates this new terrain, embracing AI’s benefits while mitigating its risks will be essential for building a future where technology and human ingenuity go hand in hand.

By the end of 2025, university students will find AI tools seamlessly woven into their learning experiences, enhancing research capabilities and providing personalised study aids. However, with these advancements come a host of challenges that students must navigate to maximise the benefits of AI while mitigating its drawbacks.

One of the most visible impacts of AI in academia is in the realm of teaching. Institutions are increasingly incorporating generative AI tools—such as ChatGPT and adaptive tutoring systems—to personalise instruction and enhance student engagement. For example, platforms now analyse individual learning patterns to tailor content and provide real‑time feedback (Using AI in the Higher Education Classroom, n.d.). These innovations have opened up new opportunities for creating flexible, learner-centered environments, where students can benefit from customised lesson plans and interactive virtual tutors (Beck & Levine, 2023).

However, while these tools can boost productivity and support differentiated instruction, educators must also navigate the challenges of ensuring academic integrity and maintaining a human touch in the learning process. As instructors, tutors, and lecturers begin to integrate AI into their curricula, professional development in AI literacy is becoming increasingly important (Mello et al., 2023).


Challenge 1: Data Privacy Concerns

As artificial intelligence (AI) becomes an integral tool in higher education—powering everything from adaptive learning systems to streamlined administrative processes—it also raises critical questions about data privacy. With universities increasingly relying on AI-driven platforms to personalise instruction and support research, safeguarding sensitive personal data has become a top priority.

AI applications in education require access to large datasets that often include detailed personal information about students, faculty, and staff (Higher Ed Dive, 2024). These systems analyse learning patterns, track academic performance, and even monitor online engagement. However, without robust security measures and transparent data practices, there is a significant risk of unauthorised data access, breaches, or misuse. Many institutions find that their current data protection policies are not fully aligned with the rapid pace of AI innovation, potentially leaving gaps in privacy safeguards (Ifenthaler, 2024).

One pressing concern is the issue of informed consent. Students and staff may not always be aware of how their data is being collected or used by AI systems, raising ethical questions about transparency and autonomy. Furthermore, the integration of AI tools can lead to unintended consequences if sensitive data is shared with third parties or exploited for purposes beyond educational enhancement (Artificial Intelligence in Education, 2025).

To mitigate these risks, third-level institutions must adopt a multifaceted approach. This includes implementing rigorous cybersecurity protocols, establishing clear and transparent data usage policies, and ensuring regular audits of AI systems. Compliance with regulations such as the European Union’s General Data Protection Regulation (GDPR) is crucial for protecting individual rights and maintaining public trust (European Commission, 2023). Additionally, investing in AI literacy programmes for both educators and students can empower the academic community to understand the implications of data collection and advocate for stronger privacy protections.
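One safeguard the audits above might look for is pseudonymisation: replacing direct identifiers before student records ever reach an AI analytics pipeline. A minimal sketch in Python, assuming a hypothetical keyed-hash scheme (the `PEPPER` secret, student ID, and field names are invented for illustration):

```python
import hashlib
import hmac

# Hypothetical institutional secret used to key the hash; in practice this
# would live in a secrets manager, never in source code.
PEPPER = b"replace-with-a-managed-secret"

def pseudonymise(student_id: str) -> str:
    """Replace a direct identifier with a stable pseudonym (HMAC-SHA256).

    The mapping is one-way without the secret, so an analytics pipeline can
    still link records across datasets without ever seeing the real ID.
    """
    return hmac.new(PEPPER, student_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"student_id": "2025-00417", "quiz_score": 82, "minutes_online": 137}

# Strip the direct identifier before the record leaves the institution.
safe_record = {**record, "student_id": pseudonymise(record["student_id"])}
```

Note that under the GDPR, pseudonymised data still counts as personal data; a scheme like this reduces exposure but does not remove the need for consent, transparency, and security controls.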

Challenge 2: Dependence on Technology

As students increasingly rely on AI tools for studying, there is a risk of becoming overly dependent on technology. Research already suggests that this dependence can hinder the development of critical thinking and problem-solving skills, which are in high demand among employers. Students may find themselves turning to AI for answers rather than engaging deeply with the material or developing their analytical abilities. Anecdotally, I am hearing from many colleagues in different disciplines (medicine, engineering and law, to name but three) that this is a significant concern.


Challenge 3: Ethical Implications

The use of AI in academia also raises significant ethical concerns. Issues such as bias in AI algorithms, fairness in automated grading systems, and the authenticity of student work are critical areas that need addressing. Students will have to grapple with these ethical dilemmas, ensuring that the use of AI aligns with academic integrity and fairness. But what do those standards look like in practice? The ground is shifting so quickly, and so continually, that pinning them down is a particular challenge.

AI’s influence extends far beyond the classroom and is full of 'known unknowns.' In research, AI tools are revolutionising literature reviews, data analysis, and even the initial drafting of scholarly articles. Researchers now use AI to quickly scan vast databases, generate novel research ideas, and even assist in writing complex academic texts (Lund et al., 2023). Such tools can save valuable time and help academics explore interdisciplinary connections that might otherwise be overlooked. But...

With AI’s expanding role in academic work, ethical concerns have come to the forefront. Issues such as plagiarism, authorship, and data privacy are central to debates about AI’s proper use in education. Recent reports indicate that some students have faced severe consequences for using AI in ways that were not clearly defined by institutional policies (People, 2024). Moreover, the reliability of AI detection tools is under scrutiny—studies show that these systems can produce false positives, sometimes disadvantaging non-native speakers or students with different writing styles (Scottish Universities Catch 400 Pupils Cheating with AI, 2024).
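The false-positive problem is partly a matter of base rates: even a seemingly low error rate yields many wrongly flagged students once an entire cohort is screened. A back-of-the-envelope calculation, with entirely hypothetical numbers (none are drawn from the studies cited above):

```python
# Illustrative arithmetic only: every rate below is an assumption.
students = 10_000          # cohort screened by an AI-detection tool
actual_ai_use = 0.05       # assume 5% of submissions genuinely used AI
false_positive_rate = 0.02 # assume the detector wrongly flags 2% of honest work
true_positive_rate = 0.90  # assume it catches 90% of genuine AI use

honest = students * (1 - actual_ai_use)    # 9,500 honest submissions
cheating = students * actual_ai_use        # 500 AI-assisted submissions

wrongly_flagged = honest * false_positive_rate   # 9,500 * 0.02 = 190
rightly_flagged = cheating * true_positive_rate  # 500 * 0.90 = 450

share_innocent = wrongly_flagged / (wrongly_flagged + rightly_flagged)
print(f"{wrongly_flagged:.0f} honest students flagged "
      f"({share_innocent:.0%} of all flags)")
# prints "190 honest students flagged (30% of all flags)"
```

Under these assumptions, nearly a third of all flags point at honest students, which is why detector output should open a conversation rather than trigger an automatic sanction.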

These challenges call for the development of clear guidelines and robust policies that balance innovation with fairness. Academic institutions must ensure that students and staff are both equipped to use AI responsibly and held to consistent ethical standards.

Challenge 4: Skill Gap

The rapid evolution of AI technology means that students must continuously update their skills to stay relevant in a competitive job market. There is a growing demand for proficiency in AI-related skills across various industries. Universities will need to adapt their curricula to equip students with the necessary knowledge and skills to thrive in an AI-driven world. As we all know, universities are typically slow to respond to and embrace change. 

Research shows that the rise of AI, alongside automation and digital transformation, is fundamentally reshaping the job market, creating a dual challenge for employers and employees: the demand for new skills and the need to reskill the existing workforce. One of the primary skills gaps is in the area of AI and data literacy. Many current job roles, regardless of industry, now require at least a basic understanding of how AI works, how to interact with intelligent systems, and how to analyse data. Failure to provide workers with AI-related competencies poses a serious risk to organisations aiming to stay competitive in today's digital economy (Bessen, 2023). For instance, roles that did not previously focus on technology, such as customer service representatives or administrative assistants, are increasingly expected to work alongside AI tools to automate manual tasks and provide actionable insights (Deloitte, 2023). This shift in worker expectations presents a significant skills gap, with most workers feeling unprepared or underqualified in the face of AI integration (IBM Skills Academy, 2024).

Moreover, AI’s rapid development in fields such as machine learning, natural language processing, and robotics has made it difficult for educational programmes to keep pace. Traditional hiring practices are also being stretched, as many companies seek hybrid skill sets that blend technical expertise with domain-specific knowledge, requiring workers to juggle diverse technical proficiencies (World Economic Forum, 2024). Additionally, soft skills like critical thinking, problem-solving, and emotional intelligence are more important than ever, as AI cannot replicate human creativity or nuanced decision-making (McKinsey & Company, 2023).

To address these gaps, there must be a united effort between educational systems, employers, and employees. Employers must invest in upskilling programmes that provide their workers with the tools they need to succeed. Likewise, governments and educational institutions have a responsibility to update curricula and vocational training to better prepare future generations.

The rise of AI in the workplace is a wake-up call for a skills revolution. If companies and workers fail to adapt, they risk falling behind in the rapidly changing job market. By closing the skills gap through meaningful investment in education and training, the workforce can grow more resilient and future-ready.

Artificial intelligence is no longer a niche technology confined to tech giants—it’s rapidly permeating every industry and reshaping the modern workplace. With AI tools such as ChatGPT and Microsoft Copilot now integrated into everyday business operations, employers across all sectors are facing a growing skills gap. Many employees simply aren’t equipped with the digital literacy and specialised training needed to harness these powerful technologies effectively (The Wall Street Journal, 2024).

As AI becomes a core element of productivity, routine tasks—from data analysis to customer service—are increasingly automated. This shift not only promises efficiency gains but also demands that workers adapt to new roles where human oversight and creativity complement machine capabilities. However, surveys indicate that while job postings now often list “AI proficiency” as a desirable skill, a significant portion of the current workforce lacks the training to use these systems proficiently (The Wall Street Journal, 2024). This mismatch poses a dual challenge: businesses must invest in upskilling their employees, and educational institutions need to revise curricula to prepare graduates for an AI-driven future.

Universities are already beginning to act on this imperative. Many are introducing specialised courses and even entire degree programmes centred on AI and machine learning, aiming to fill the talent gap before students even enter the workforce (Higher Ed Dive, 2024). Yet, for those already employed, the need for continuous learning is critical. Employers are increasingly offering professional development programmes that focus on practical AI skills, such as prompt engineering, ethical data use, and integrating AI tools into daily workflows. Without such initiatives, companies risk falling behind in a competitive market where AI expertise can be a decisive factor for success.

The urgency to bridge the AI skills gap is clear: as technology continues to evolve, so too must our workforce. By investing in comprehensive training programmes and adapting educational frameworks, both businesses and institutions can ensure that employees are not left behind in the AI revolution. Ultimately, a well-prepared workforce will not only drive innovation but also safeguard against the risks of technological displacement.


Challenge 5: Accessibility Issues

While AI has the potential to make education more accessible, it could also widen the gap for students with limited access to technology. Ensuring that all students have equal opportunities to benefit from AI-enhanced learning tools will be crucial over the next decades. Universities must work towards bridging the digital divide to prevent further inequality in educational access.

Issues could include user interface challenges or the compatibility of AI tools with assistive technologies. Many AI tools aren’t designed with students with disabilities in mind, leading to compatibility issues with assistive devices such as screen readers. AI systems may also generate outputs that are themselves inaccessible, such as images without descriptive text or poorly structured documents, disadvantaging the very students who depend on assistive technology. Universities must prioritise universal design principles to ensure equitable access.
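One concrete universal-design check is making sure AI-generated course material carries text alternatives for images, which screen readers depend on. A minimal sketch using Python's standard-library HTML parser (the page snippet and class name are invented for illustration):

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute.

    A minimal sketch of one accessibility check an institution might run
    over AI-generated course pages before publishing them.
    """
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # An absent or blank alt attribute leaves screen-reader users
            # with nothing to announce for this image.
            if not (attr_map.get("alt") or "").strip():
                self.missing_alt.append(attr_map.get("src", "<no src>"))

page = ('<p>Intro</p>'
        '<img src="graph.png">'
        '<img src="logo.png" alt="University logo">')
audit = AltTextAudit()
audit.feed(page)
print(audit.missing_alt)  # prints ['graph.png']
```

A real audit would also cover heading structure, colour contrast, and document tagging, but even this small check catches a failure mode common in generated content.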

Concluding Remarks

While AI presents exciting opportunities for enhancing education, it also brings a set of challenges that we will all need to navigate throughout 2025 and beyond. The future of AI in academia is both promising and complex. As AI technologies become more sophisticated, they will continue to drive efficiency in administrative tasks, research, and teaching. Educational institutions are already experimenting with new AI-driven programmes and interdisciplinary minors designed to prepare students for a job market where AI literacy is as essential as traditional computer skills (The Wall Street Journal, 2024).

Yet, the evolution of AI also demands a new level of critical engagement. Students and educators alike will need to develop not only technical skills but also a critical understanding of how AI systems work—and where they might fail. Embracing AI’s potential while safeguarding academic values will be the key to a sustainable and equitable future in education (Kamalov, Santandreu Calonge, & Gurrib, 2023).

Despite the many advantages of AI, there is a growing awareness of the pitfalls of over-reliance on AI. The phenomenon of “hallucinations”—where AI generates plausible but incorrect information—underscores the need for rigorous verification and critical analysis of AI-generated content (Kamalov, Santandreu Calonge, & Gurrib, 2023).


References

Artificial Intelligence in Education. (2025, February 24). Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Artificial_intelligence_in_education

Beck, S. W., & Levine, S. R. (2023). Backtalk: ChatGPT: A powerful technology tool for writing instruction. Phi Delta Kappan, 105(1), 66–67.

Bessen, J. E. (2023). AI and the workforce: The challenge of skills gaps. Brookings Institution. Retrieved from https://www.brookings.edu/ai-and-the-workforce

Deloitte. (2023). The future of work in the AI era. Deloitte Insights. Retrieved from https://www2.deloitte.com/global/en/insights/future-of-work-ai

European Commission. (2023, August 6). EU AI Act: First regulation on artificial intelligence. Retrieved from https://ec.europa.eu/digital-single-market/en/news/eu-ai-act-first-regulation-artificial-intelligence

Higher Ed Dive. (2024, August 5). Empowering higher education with artificial intelligence. Retrieved from https://www.highereddive.com/spons/empowering-higher-education-with-artificial-intelligence/714549

IBM Skills Academy. (2024). AI and skills development: Preparing the workforce for the future. IBM Skills. Retrieved from https://www.ibm.com/skills/ai

Ifenthaler, D. (2024). Artificial intelligence in education: Implications for data privacy and security. Journal of Educational Technology & Society, 27(1), 42–55.

Kamalov, F., Santandreu Calonge, D., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. arXiv. https://arxiv.org/abs/2305.18303

Lund, B., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial intelligence‑written research papers and the ethics of the large language models in scholarly publishing. arXiv. https://arxiv.org/abs/2303.13367

McKinsey & Company. (2023). The evolving role of human skills in the age of AI. McKinsey Report. Retrieved from https://www.mckinsey.com/evolving-trends-human-skills-ai

Mello, R. F., Freitas, E., Pereira, F. D., Cabral, L., Tedesco, P., & Ramalho, G. (2023). Education in the age of generative AI: Context and recent developments. arXiv. https://arxiv.org/abs/2309.12332

People. (2024, October 16). Parents sue school after son gets punished for using AI on class project, insist ‘it wasn’t cheating’. People. https://people.com/parents-sue-school-after-son-gets-punished-for-using-ai-on-class-project-8729032

Scottish universities catch 400 pupils cheating with AI. (2024, August 4). The Times. https://www.thetimes.co.uk/article/scottish-universities-catch-400-pupils-cheating-with-ai-58t9bxm0q

The Wall Street Journal. (2024, August 5). Colleges race to ready students for the AI workplace. The Wall Street Journal. https://www.wsj.com/us-news/education/colleges-race-to-ready-students-for-the-ai-workplace-cc936e5b 

World Economic Forum. (2024, June 9). Skills for the future: Closing the gap in AI training. World Economic Forum. Retrieved from https://www.weforum.org/skills-gap-ai

Using AI in the higher education classroom. (n.d.). UNT Digital Strategy. Retrieved February 24, 2025, from https://digitalstrategy.unt.edu/clear/teaching-resources/theory-practice/using-ai-in-higher-education-classroom.html