Artificial Intelligence, Education and the Professional Perspective
Introduction
Artificial intelligence, professional digital competence (Norhagen et al., 2024), and the professional perspective are pressing topics in this era of change, where artificial intelligence is reshaping our society. In light of the recent parliamentary report on professional education (Ministry of Education and Research [MER], 2024), it is important to address how both interplay and counterplay with artificial intelligence in professional education can contribute to a better understanding of the digital competence tomorrowʼs professionals will need. As the parliamentary report notes, “The development within artificial intelligence and machine learning will have significant and unpredictable consequences for many professions, and the needs for competence can change in ways that are difficult to foresee” (MER, 2024, p. 20). In this editorial, I highlight some approaches to artificial intelligence from an educational and professional perspective, as well as glimpses of how the formulation and realization arenas (policy and practice) must collaborate in such processes.
Professional Digital Competence and Artificial Intelligence
In a column (Krumsvik, 2024), I highlight both challenges and opportunities with artificial intelligence, discussed in light of studentsʼ learning processes. Numerous meta-analyses (Hattie & Timperley, 2007; Wisniewski et al., 2020) show that feedback is very important for students, and there is evidence that artificial intelligence has opened a new and interesting avenue for feedback in such learning processes (Krumsvik, 2023a), exemplified by medical student Vegard Slettvollʼs use of ChatGPT in his studies (UiB, 2023). However, to use AI as a sparring partner, students must have good professional digital competence (Norhagen et al., 2024) to exploit this opportunity. It is therefore increasingly important that students understand what artificial intelligence is and master techniques such as chain-of-thought prompting and scripts when writing their bachelorʼs or masterʼs theses (a minimal illustration is sketched below). They must also know how to acknowledge the use of ChatGPT, GPT-4, Gemini Advanced, Claude, etc., in accordance with citation styles such as APA, Chicago, and Harvard; such guidance is already integrated in APA 7 (APA, 2024). When it comes to these generic skills, library staff, advisors, fellow students, methods books, and university learning-resource pages can be helpful for students. It is also becoming more common to declare such use transparently in international scientific journals, and emerging tendencies toward this are already seen in some recently submitted manuscripts to the Nordic Journal of Digital Literacy. For instance, an article in another scientific journal stated that “ChatGPT was employed in this article to summarize and rephrase some parts of the document. The ChatGPT output was manually edited and reviewed by the authors” (Skjuve et al., 2024, p. 18). It is also important to consider whether more domain-specific chatbots should be developed nationally within different disciplines (e.g., Krumsvik, 2023b), as well as the need for new formative and summative assessment forms that combine factual knowledge, assessment ability, methodological ability, and reasoning and problem-solving ability¹ in order to avoid well-known ethical pitfalls in higher education.
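As a concrete illustration of what chain-of-thought prompting can involve, the sketch below sends a step-by-step reasoning prompt to a language model through the OpenAI Python SDK. This is a minimal, hedged example rather than a recipe from the works cited above: the model name, the helper function, and the prompt wording are my own illustrative assumptions.

```python
# A minimal sketch of chain-of-thought prompting, assuming the OpenAI
# Python SDK is installed and an API key is set in the environment.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def thesis_feedback(research_question: str) -> str:
    """Ask the model to reason step by step before giving feedback."""
    prompt = (
        "You are an academic supervisor. Think step by step: first restate "
        "the research question, then assess whether it is researchable, "
        "and finally suggest one concrete improvement.\n\n"
        f"Research question: {research_question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(thesis_feedback("How does AI affect pupils' motivation?"))
```

The point of the explicit “think step by step” instruction is to elicit intermediate reasoning before the conclusion, which is the core idea behind chain-of-thought prompting.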
Artificial Intelligence, Digital Divides, and Digital Exclusion
It is also important to recognize that free versions of the major language models have so far dominated the discourse internationally, while subscription-based access is required to use the best and most powerful versions (e.g., GPT-4, Gemini Advanced, Claude, etc.). This can create digital divides and digital exclusion, and the knowledge base indicates that one should be aware of such a “Matthew effect” when AI is implemented in the education sector, as many students and pupils cannot afford to pay $20–30 per month to access the best language models. In the long run, this can cement existing class differences, and there is a need for vigilance here, as social mobility is often contingent on how well one succeeds with inclusion and differentiation in educational contexts, including when AI is used. To prevent digital divides and digital exclusion, and to use AI sensibly, much indicates that vigilance across different technology discourses is needed, perhaps oriented towards the concept of digital pluralism. In 2021, I wrote: “Digital pluralism means being vigilant that the digitalization of school, education, and society also affects the power balance between different cultures, beliefs, lifestyles, and political positions. This involves being tolerant and advocating openness to a diversity of viewpoints, attitudes, and values in the digitalized society. By seeing digital pluralism as related to the UNʼs Convention on the Rights of the Child, the interdisciplinary theme of democracy and citizenship, and digital education in the curriculum renewal (LK20), one can get a broader perspective on how digitalization affects underlying factors for childrenʼs upbringing, citizenship, and the power balance and power shift both at a societal level but also right down to the pupilsʼ everyday school life. Contributing to safeguarding digital pluralism and digital education and preventing digital divides is becoming an increasingly important task for schools (in collaboration with homes)” (Krumsvik, 2021, p. 246). Already in his books from 1996, 2001, and 2003, former Holberg Prize winner Manuel Castells warned about digital divides. With this in mind, I also touched on the topic in a scientific article in 2006 (Krumsvik, 2006). Digital divides have since been a sub-perspective in several of our research projects, such as the Rogaland study (2009–2011; Krumsvik et al., 2011) and the SMIL study (2012–2013; Krumsvik et al., 2013), with over 20,000 teachers and pupils, and the findings have been published in both Norwegian (Krumsvik, 2021) and international (Krumsvik et al., 2020) books. Additionally, in my latest book (Krumsvik, 2023c), I devote space to digital divides and digital exclusion and note that we need more “…studies that look closer at what constitutes digital divides and digital exclusion in the population, such as whether age, gender, ethnicity, educational level, and previous experience with technology play a role” (p. 255). Seen from a professional perspective, this calls for more collaboration between academia and other sectors with high competence on digital exclusion and digital divides, such as human rights organizations like the Rafto Foundation, which also wants such cross-sector collaboration: “The Rafto Foundation will strengthen collaboration with the university and college sector during the current strategy period.
Our strategy states: The Rafto Foundation will build bridges between research and practice in classrooms, look critically at its own teaching, and encourage students and researchers to explore teaching in democracy and human rights” (Rafto Foundation, 2024). This is something the university and college sector should welcome in light of the aforementioned profession report, so that AI is contextualized against the digital divides that can arise if one is not vigilant about this in professional education. This also involves adopting a critical perspective on technological determinism and on the shadow sides of artificial intelligence, for example where artificial intelligence can reinforce stereotypes (Spjeldnæs, 2024; Krumsvik, 2024).
Artificial Intelligence as a Sparring Partner
With this in mind, there is also a growing awareness of the importance of teachers in higher education being “hands-on” with artificial intelligence for students, both because of the mentioned professional perspective and as part of their professional digital competence. There is currently a gap between perceived opportunities (“perceived affordances”) and real opportunities (“real affordances”) (Norman, 1999) regarding the potential of language models in higher education. Preliminary evidence indicates that for students to be able to use language models academically as one of several validation communities, they must have solid research-methodological, academic, and professional digital competence (Krumsvik, 2023c). Only then can we begin to realize the “real affordances” of language models and link them to educational pillars such as in-depth learning, Vygotskyʼs ZPD, Wood and Brunerʼs scaffolding, Bloomʼs 2-sigma challenge, and Coleʼs artifact concept. To avoid well-known educational pitfalls in such learning contexts, such use requires not only an inside view but also an outside view, in light of anthropologist Clyde Kluckhohnʼs (1949) metaphor: “It would hardly be fish who discovered the existence of water” (p. 11). For example, it is not enough to use AI in campus-based teaching for students; such use must also incorporate how artificial intelligence and professional digital competence in, for example, teacher education affect the practice field and the schools where student teachers will work after completing their education. Since the overarching part of the curriculum renewal clearly states that the school has both an educational and a Bildung mission, it is important to see these in a dialectical relationship when discussing artificial intelligence. Not everything in school revolves around school achievement per se. The complexity of the digital Bildung arenas that pupils participate in increases with the impact of artificial intelligence both in and outside school, and both teacher educators and student teachers must be vigilant about this. This implies continuously assessing which professional digital competence student teachers and teacher educators need to meet this complexity and the entry of artificial intelligence into both teacher education and school. One example of such professional digital competence is the ability to analyze and discuss the pitfalls of using ChatGPT in teaching, and how professional digital competence can counteract these. It is also worth looking at what the more advanced language models (GPT-4, Gemini Advanced, Claude, and so on) can contribute towards avoiding such pitfalls. Can we “train” and condition these language models to be less consensus-oriented and more confrontationally critical when studentsʼ lead texts (in-context prompting) are characterized by preconceptions, or are vague, prejudiced, uncritical, stereotypical, or similar? A sketch of what such conditioning can look like is given below. It is natural to relate such processes to the discussions around surface and in-depth learning (and higher-order thinking) and how this can play out in interactions with a “chatbot in the chatbot” (Writing the Synopsis Companion; Krumsvik, 2023b). In addition, national language models are being developed, such as NORLLM, a Norwegian language model trained on Norwegian infrastructure, language, and data, which might avoid some of the cultural biases that American LLMs have had so far.
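To make the question above more tangible, here is a minimal sketch, under clearly stated assumptions, of how a system prompt (a simple form of in-context conditioning) could steer a language model towards a more confrontational-critical stance on a studentʼs lead text. The system-prompt wording, model name, and example text are my own illustrative assumptions, not a method documented in the works cited here.

```python
# Sketch: conditioning a language model via a system prompt to be less
# consensus-oriented and more confrontationally critical towards a
# student's lead text. All names and wordings are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITICAL_REVIEWER = (
    "You are a demanding but fair academic opponent. Do not agree by "
    "default. Point out preconceptions, vagueness, prejudice, and "
    "stereotypes in the student's text, and challenge each one with a "
    "concrete counter-question."
)

student_lead_text = (
    "Everyone knows pupils learn better with technology, so schools "
    "should digitize all teaching as quickly as possible."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": CRITICAL_REVIEWER},
        {"role": "user", "content": student_lead_text},
    ],
)
print(response.choices[0].message.content)
```

Whether such conditioning reliably produces in-depth, higher-order feedback rather than surface critique is precisely the kind of question that needs empirical testing.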
Being “hands-on” with such professional digital competence can be thematically oriented towards academic writing, where such chatbots can be part of the process, as well as towards how one can use AI as a sparring partner to develop stringent research designs in masterʼs theses, and so on.
Artificial Intelligence, Education, and Work Relevance: Does the Human–Machine Metaphor Hold Water?
However, this will also have work relevance after completed studies, since todayʼs students will likely encounter a working life where AI is making its mark to an ever greater extent. If one looks more concretely at the professional perspective, much suggests that we will, for example, see more and more video avatars and humanoid chatbots used in working life in the future. During the pandemic, WHOʼs chatbot/digital health promoter SARAH (WHO, 2024) appeared to answer pandemic questions; today it is more generic and has interesting potential if developed further. Our own research school has also developed and used a domain-specific chatbot (Krumsvik, 2023b) and video avatars for more multimodal information dissemination (Krumsvik, 2023d) over a half-year period (November 2023 – May 2024), and the experiences so far have been promising and uplifting. Several other similar initiatives are underway worldwide. The question is whether professionals must more often apply the “duck test” (Ayto & Crofton, 2011) to chatbots: “If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.” It is a time of upheaval, and currently there is great attention around various types of chatbots such as ChatGPT, Gemini Advanced, Claude, and GPT-4, as well as more typical social chatbots like Replika, Anima, Kajiwoto, and Microsoft XiaoIce. Many ask how we should perceive and describe such a form of artificial intelligence and how it will affect various professions in the future. Are they just machines, or are they something more? Letʼs take an example. If one adopts a reductionist understanding of other social phenomena such as dance, art, music, and so on, one quickly loses the holistic experience of these art forms. If one, for example, deconstructs many of todayʼs hit songs, one may quickly have to acknowledge that technology and machines contribute a great deal to them, even though the holistic music experience presents itself as a beautiful melody that touches people. Within medicine, there is currently a debate warning against simplification and reductionism of human capabilities by, for example, looking blindly at sub-processes in the brain as the recipe for a better life (Brodal, 2018). So how does this play out with chatbots like ChatGPT, GPT-4, and Replika? Why does the pensioner Agnete (90 years old) say that she has become close friends with ChatGPT (NRK, 2023), and why do many experience, for example, Replikaʼs ability to comfort and support them when they are having a hard time? In everyday discourse, it is quite common today to have a reductionist understanding of chatbots, namely that they are just machines. At the same time, one paradoxically also constantly compares them against human characteristics. One can therefore ask whether this premise and this form of reductionism fully manage to capture what ChatGPT, GPT-4, Claude, Gemini Advanced, and Replika actually are and what they are capable of. The knowledge base suggests that the mentioned premise and reductionism are somewhat simplified representations of the true nature and capabilities of chatbots (Krumsvik, 2023a). For example, studies from the pandemic (Pentina et al., 2023) find that “the overwhelming majority of Replika users experienced improvements in mental well-being during the COVID pandemic social isolation” (p. 11).
Through analysis of the knowledge base, together with my own knowledge summaries, testing, and reanalysis in my book, I find that the language model GPT-4 has much better capabilities than ChatGPT in several areas, including language games, understanding of human emotions, and so on. In a recently published comment, I mention this and also refer to a study in Scientific Reports that made a similar comparison, revealing that these capabilities also extend to “soft skills”: “(…) LLMs demonstrate impressive results in these questions that test soft skills required from physicians. The superiority of GPT-4 was further confirmed in our study with the model correctly answering 90% of the soft skill questions in contrast to ChatGPTʼs accuracy of 62.5%. These results indicate that GPT-4 displays a greater capacity to effectively tackle not just medical knowledge-based questions but also those requiring a degree of empathy, ethical judgement, and professionalism (…) To conclude our findings underscore the potential role of LLMs in augmenting human capacity in healthcare especially in areas requiring empathy and judgement. GPT-4ʼs performance surpassed the human performance benchmark, emphasizing its potential in handling complex ethical dilemmas that involve empathy, which are critical for patient management” (Brin et al., 2023, p. 16492). In another study, the chatbot AMIE proved better at making correct diagnoses than real doctors (Tu et al., 2024). DellʼAcqua et al. (2023) show significant performance improvements associated with the use of GPT-4: both AI conditions investigated clearly outperformed the control group that did not use GPT-4. Historically, it is normally those with the least education who are hit first by changes in working life, while the highly educated and academics have been less affected; this study, however, indicates that those with the highest education and salaries will now be most affected, either by the job changing or by it being phased out. Other parts of the knowledge base also show that if we dwell only on the machine metaphor, we will probably not move forward in our understanding without also addressing what these systems are capable of and the more holistic, abstract, emotional, and human-like dimensions of this type of AI. In other words: in physical contexts, humans can be the ones who grab and the ones who are grabbed; we can touch and be touched (Merleau-Ponty, 1994). But how does this play out in social interaction with chatbots?
The philosopher Knud E. Løgstrup (1956) points out that in the physical encounter between people, being physically present is the same as being seen. We often feel more closeness in such contexts, and the physical face-to-face meeting is characterized by trust and is ethically charged. This is an ethics of proximity that raises our ethical awareness and the ethical demand. Body language supports this, and a pat on the shoulder, a smile, a nod of recognition, or a handshake has existential dimensions that remote presence cannot recreate. But why does the knowledge base show that, both under the social isolation of a pandemic and in everyday life, chatbots apparently manage to recreate some of this ethics of proximity? Is it the multimodal nature of chatbots, combined with their proficiency in Wittgensteinʼs (1997) language games, that underpins the relationships they create with users? Do such chatbots manage to establish a “we-centric zone” (Gallese, 2009) that one usually associates only with the physical meeting, and do such meetings have a stronger reciprocity than we have believed? Do they manage to maintain the relational in ways we usually associate with the physical meeting between people? Are virtual presence and togetherness established that we have not understood chatbots could be capable of? Or is this more exaggerated wishful thinking and technology euphoria than reality? To answer this, one must (re)consider whether and, if so, how our social presence and interaction with chatbots seem to create a sense of closeness and togetherness for many users. Can believing in a chatbot as a friend and dialogue partner (as Agnete does) underpin the abstract aspects of such AI phenomena? Much suggests that, in addition to chatbotsʼ and video avatarsʼ ability to master language games (Wittgenstein, 1997), their multimodal features, and the human-like appearance of some of them, the “online disinhibition effect” (Suler, 2004) plays a part. This effect makes people express themselves more openly and without inhibition about personal feelings virtually than they would in physical meetings. This can happen because one can act anonymously, because of solipsistic introjection (the feeling of being alone in the conversation), and because one is on an equal footing. This can create a “benign” form of online disinhibition in the interaction with chatbots, since the benign form includes positive openness and honesty that can appear as something resembling friendship. Chatbots are also always there, available 24/7, when young people need them most. The knowledge base shows that social chatbots can generate support, engaging advice, changed mindsets, feedback, support and comfort, as well as flow-like states in people (Nosta, 2023). So, if one bases oneʼs perspective solely on the simplified and reductionist view that chatbots like ChatGPT, GPT-4, and Replika are just machines, this is somewhat narrow in light of the aforementioned. Social chatbots can therefore be involved in changing our view of the relational in virtual contexts, with increasingly advanced video avatars making chatbots so human-like that it is not always easy to distinguish them from real people. The downside is, of course, that the same technology is used for “deepfakes” and the like, which illustrates that chatbots and video avatars also have a number of problematic sides that one must be aware of.
So maybe one can rewrite the initial saying to: “If it looks like a human, behaves like a human, talks like a human, and comforts like a human, then it is probably a humanoid chatbot.” Thus, not just a machine. Maybe chatbots can contribute to a new form of (digital) togetherness and presence, maybe they can moderate the “bugbear” (“styggen på ryggen”) for some youngsters, and maybe they are one of several Bildung-oriented “lamps along your path” (“løkter langs din vei”) in an increasingly digitized society. Some studies show the contours of this (Brandtzaeg et al., 2022). As long as the basic instinct of humans is that we are social beings, one can ask whether togetherness in both physical and virtual communities has become part of the educational journey for more and more people, where personal feelings are woven into both of these communities. Nevertheless, everything suggests that for the foreseeable future we will still mostly lean on human-to-human closeness in physical meetings, where the (technology-free) quiet moment still has the deepest foundation.
Summary and Conclusion
In this editorial, I have highlighted some approaches to artificial intelligence from an educational and professional perspective and given some examples of areas that various professions should be vigilant about. Although the knowledge base on AI and chatbots is still limited, it shows tendencies indicating that artificial intelligence in general, and machine learning and the capabilities of language models in particular, can influence and change various professions in the years to come. At the same time, this is not just a “walk in the park”: artificial intelligence is an ethical minefield that requires vigilance at all levels to navigate safely in this complex landscape. It is therefore important to promote a more distinct research discourse within AI, grounded in the knowledge base, that is, a better conceptualization of AI research, and to ask whether conventional theories have managed to “keep up” with and “read” the expansive development we have seen within AI in recent years. Since research is by nature backward-looking, it has limitations that do not quite capture the digital leaps happening here and now with the large language modelsʼ capabilities. It is therefore important to also address the limitations of the current knowledge base within AI, for example by conducting reanalyses, testing/experimenting, and knowledge summaries (for example, rapid reviews) to complement it. In this way, one can also navigate better between everyday discourse and research discourse about AI, so that the professional perspective is based on both experiential and research-based knowledge. In addition, several studies show that AI can underpin research-methodological innovation of great value for improving research designs in the years to come, where systematic reviews and knowledge summaries are important to build on (Krumsvik et al., 2023d).
Footnote
1. See example here: link
References
American Psychological Association (2024). APA 7. How to cite ChatGPT. https://apastyle.apa.org/blog/how-to-cite-chatgpt
Ayto, J. & Crofton, I. (Eds.) (2011). Brewerʼs Dictionary of Modern Phrase & Fable (2nd ed.). Chambers Harrap Publishers.
Brandtzaeg, P. A., Skjuve, M., & Følstad, A. (2022). My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research, 48(3), 404–429. https://doi.org/10.1093/hcr/hqac008
Brin, D., Sorin, V., Vaid, A., Soroush, A., Glicksberg, B. S., Charney, A. W., Nadkarni, G., & Klang, E. (2023). Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments. Scientific Reports, 13(1), 16492. https://doi.org/10.1038/s41598-023-43436-9
Brodal, P. (2018). Nevrokulturell imperialisme. Tidsskrift for Den norske legeforening. https://doi.org/10.4045/tidsskr.18.0707
DellʼAcqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4573321
Gallese, V. (2009). Mirror neurons, embodied simulation, and the neural basis of social identification. Psychoanalytic Dialogues, 19(5), 519–536. https://doi.org/10.1080/10481880903231910
Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Kluckhohn, C. (1949). Mirror for Man. Fawcet.
Krumsvik, R. (2006). The digital challenges of school and teacher education in Norway: Some urgent questions and the search for answers. Education and Information Technologies, 11(3–4), 239–256. https://doi.org/10.1007/s10639-006-9010-8
Krumsvik, R. J. (2021). Digital pluralisme, digital danning og digitale skiller. In M. B. Postholm, P. Haug, E. Munthe & R. J. Krumsvik (Eds.), Elev i skolen 5–10. Mangfold og mestring. Cappelen Damm Akademisk.
Krumsvik, R. J. (2023a, February 9). Adaptive læringsverktøy og kunstig intelligens i skolen – noen tendenser. Khrono. https://khrono.no/adaptive-laeringsverktoy-og-kunstig-intelligens-i-skolen-noen-tendenser/7583
Krumsvik, R. J. (2023b). Writing the synopsis companion for PhD-candidates. https://chat.openai.com/g/g-T6wJuA5tr-writing-the-synopsis-companion-for-phd-candidates
Krumsvik, R. J. (2023c). Digital kompetanse i KI-samfunnet. Noen glimt fra hvordan kunstig intelligens preger våre liv. Cappelen Damm Akademisk.
Krumsvik, R. J. (2023d). Presentation of a research school by a video avatar. UiB.
Krumsvik, R. J. (2024). AI is the answer: What was the question? Part of series: Artificial intelligence in educational research and practice. British Educational Research Association Blog. BERA. https://www.bera.ac.uk/blog/ai-is-the-answer-what-was-the-question
Krumsvik, R. J., Ludvigsen, K. & Urke, H. B. (2011). Klasseleiing i vidaregåande opplæring [Research report]. Universitetet i Bergen. https://www.regjeringen.no/no/dokumenter/klasseledelse-og-ikt-i-videregaende-oppl/id664830/
Krumsvik, R. J., Egelandsdal, K., Sarastuen, N. K., Jones, L. & Eikeland, O. J. (2013). Sammenhengen mellom IKT og læringsutbytte (SMIL) i videregående skole [Research report]. Kommunesektorens organisasjon (KS) & Universitetet i Bergen. https://www.iktogskole.no/wp-content/uploads/2014/05/Sluttrapport_SMIL.pdf
Krumsvik, R. J., Jones, L., Eikeland, O. J., Røkenes, F. M. & Høydal, K. (2020). Digital competence and digital inequality in upper secondary school. A mixed method study. In S. Doff & J. Pfingsthorn (Eds.), Tagungsband – Abschlusskonferenz der HWK-Fokusgruppe “Media Meets Diversity@School”: Implications and Challenges for Teacher Education (pp. 182–198). Wissenschaftlicher Verlag Trier.
Løgstrup, K. E. (1956). Den Etiske Fordring. Gyldendal.
Merleau-Ponty, M. (1994). Phenomenology of Perception. Routledge & Kegan Paul.
Ministry of Education and Research (MER) (2024). Meld. St. 19 (2023–2024). Profesjonsnære utdanningar over heile landet. Ministry of Education and Research.
Norhagen, S. L., Krumsvik, R. J., & Røkenes, F. M. (2024). Developing professional digital competence in Norwegian teacher education: A scoping review. Frontiers in Education, 9, 1363529. https://doi.org/10.3389/feduc.2024.1363529
Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6(3), 38–43. https://doi.org/10.1145/301153.301168
Nosta, J. (2023, May 30). AI And Peak Performance. Medium. https://johnnosta.medium.com/ai-and-peak-performance-2ab2f87bf7a0
NRK (2023). Agnete Tjærandsen på Høyres landsmøte 2023. https://www.nrk.no/video/agnete-tjaerandsen-paa-hoyres-landsmote-2023_8fa75486-e0f8-4842-88d8-bbfa62c5f088
Pentina, I., Hancock, T. & Xie, T. (2023). Exploring relationship development with social chatbots: A mixed-method study of Replika. Computers in Human Behavior, 140, 107600. https://doi.org/10.1016/j.chb.2022.107600
Rafto Foundation (2024). The Rafto Foundation of Human Rights. https://www.rafto.no/en/
Skjuve, M., Brandtzaeg, P. B., & Følstad, A. (2024). Why do people use ChatGPT? Exploring user motivations for generative conversational AI. First Monday, 29(1). https://doi.org/10.5210/fm.v29i1.13541
Spjeldnæs, A. (2024). Kunstig intelligens forsterker stereotypier. Tidsskrift for Den norske legeforening. https://doi.org/10.4045/tidsskr.24.0058
Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321–326. https://doi.org/10.1089/1094931041291295
Tu, T., Palepu, A., Schaekermann, M., Saab, K., Freyberg, J., Tanno, R., Wang, A., Li, B., Amin, M., Tomasev, N., Azizi, S., Singhal, K., Cheng, Y., Hou, L., Webson, A., Kulkarni, K., Mahdavi, S. S., Semturs, C., Gottweis, J., … Natarajan, V. (2024). Towards Conversational Diagnostic AI. https://doi.org/10.48550/ARXIV.2401.05654
University of Bergen (2023). Medisinstudent bruker Chat GPT. UiB. https://www.youtube.com/watch?v=pSvVEdVuHuo
WHO (2024). S.A.R.A.H., a Smart AI Resource Assistant for Health. https://www.who.int/campaigns/s-a-r-a-h
Wisniewski, B., Zierer, K., & Hattie, J. (2020). The Power of Feedback Revisited: A Meta-Analysis of Educational Feedback Research. Frontiers in Psychology, 10, 3087. https://doi.org/10.3389/fpsyg.2019.03087
Wittgenstein, L. (1997). Filosofiske undersøkelser (M. B. Tin, Trans.). Pax forlag.
Copyright
Copyright © 2024 Author(s).
This is an open access article distributed under the terms of the Creative Commons CC-BY 4.0 License (https://creativecommons.org/licenses/by/4.0/).
History
Published online: 4 June 2024
Issue date: 4 June 2024