Posted on: GenAI & Education

Generative Artificial Intelligence (AI) Guidance from MSU
We collectively share the responsibility to uphold intellectual honesty and scholarly integrity. These are core principles that may be compromised by the misuse of GenAI tools, particularly when GenAI-generated content is presented as original, human-created work.
You can learn more about generative AI and participate in discussions here on iteach.msu.edu.
Click section 3 of this playlist for additional resources on generative AI on #iteachmsu!
Posted on: GenAI & Education

An Intro to Artificial Intelligence
The following is a collection of publicly sourced articles related to the recent rise of artificial intelligence in everyday applications.
Note: following each [hyperlinked] title/subtitle are abstracts generated by ChatGPT after receiving prompts from M. Neal. "Write an abstract for [article link]" is indicated with [URL] abstract. "Write an abstract for [article text]" is indicated with [text] abstract. These very similar prompt-responses are shared consecutively in the hope that reading the abstracts will showcase the variations in ChatGPT output (despite similar input).
Big Tech was moving cautiously on AI. Then came ChatGPT.
Google, Facebook and Microsoft helped build the scaffolding of AI. Smaller companies are taking it to the masses, forcing Big Tech to react.
[URL] Abstract: "ChatGPT, the Large Language Model developed by OpenAI, has been acquired by Google for an undisclosed sum. The acquisition has been widely seen as a move to improve Google's AI capabilities and stay ahead of competitors in the field. The ChatGPT model will be integrated into various Google services and products, including search, voice assistants, and translation services. This acquisition highlights the growing importance of AI technology and the increasing demand for advanced language models like ChatGPT."
[text] Abstract: Meta's Blenderbot, a chatbot released by Facebook's parent company three months before OpenAI's ChatGPT, was deemed a failure due to its over-cautious content moderation. In contrast, ChatGPT's conversational style, which allows for discussion on controversial topics, has made it a mainstream success. As ChatGPT continues to grow in popularity, tech giants such as Meta and Google are feeling pressure to speed up their own AI processes, potentially disregarding safety concerns. This has sparked concerns among AI ethicists, who fear that big tech's rush to market may expose billions of people to potential harms before they have been fully studied. Despite these concerns, companies like OpenAI continue to release their AI tools to the public, viewing it as the only way to assess real-world harm. The fast pace of AI progress requires companies to prioritize making the right decisions and releasing AI models that best serve the community, while considering the broader societal impacts of these innovations.
ChatGPT Is Dumber Than You Think
Treat it like a toy, not a tool.
Abstract: "The deployment of AI language models like ChatGPT, developed by OpenAI, in writing and journalism raises questions about the ethics of AI-generated content. ChatGPT is capable of producing human-like text, making it difficult to distinguish between AI-generated content and content written by a human. The ethical concerns include issues of accuracy, accountability, and bias in AI-generated writing, as well as the potential for AI systems to manipulate public opinion. The article highlights the need for transparent and responsible deployment of AI language models in journalism and writing, and the importance of considering the ethical implications of AI-generated content."
ChatGPT could be used for good, but like many other AI models, it's rife with racist and discriminatory bias
[URL] abstract: ChatGPT, a large language model developed by OpenAI, is prone to biases like many other AI models. Despite its impressive performance in generating human-like text, the model has been found to have biases based on the training data it was exposed to. These biases can be a result of various factors such as the demographic representation in the training data, cultural norms and values, and other systemic biases present in society. It is crucial for developers and users to be aware of these biases and take steps to mitigate them.
[text] abstract: ChatGPT, an advanced artificial intelligence chatbot developed by OpenAI, is at the forefront of the debate on the potential and dangers of AI. Although the chatbot can assist with writing and speaking tasks and provide entertainment, it has also been used to cheat, plagiarize, spread misinformation, and enable unethical business practices. Furthermore, like many chatbots, it has a history of being biased. OpenAI added guardrails to prevent problematic responses, but these were easily bypassed by users. As AI is increasingly used to make important decisions in various industries, such as healthcare and law enforcement, the issue of bias must be addressed to prevent amplifying existing inequalities. This problem is not unique to ChatGPT and is well-documented, with studies showing the negative effects of biased algorithms. The AI industry is divided on who should be held responsible for these biases and whether harmful models should be released. Despite these concerns, AI models are becoming more prevalent in our lives and researchers are urging caution on the potential for harm due to AI biases.
The US Copyright Office says an AI can’t copyright its art
[text] abstract: The US Copyright Office has rejected a request to copyright a work of art created by an AI algorithm named Creativity Machine. A three-person board determined that the AI-generated image lacked the necessary "human authorship" required for protection under copyright law. The board emphasized the vital role of the human mind in creative expression, citing past court decisions which have consistently found that non-human expression is ineligible for copyright protection. This decision highlights the importance of human agency in machine-produced works and raises questions about the limits of AI in the realm of art and copyright.
[Google] An important next step on our AI journey
[URL] abstract: This blog post provides an overview of BARD (Bidirectional Attention Representation for Documents), a new AI-powered search system developed by Google. BARD leverages the latest advances in natural language processing and machine learning to deliver more relevant and accurate search results, even for complex queries. The system utilizes a bidirectional attention mechanism to better understand the context of each word in a document and make connections between words, which helps to improve the relevance of search results. With BARD, Google aims to provide users with a more intuitive and seamless search experience, enabling them to find what they're looking for more quickly and easily.
[text] abstract: Google has opened up its experimental conversational AI service, Bard, to trusted testers ahead of wider availability. Bard aims to combine the world's knowledge with the power, intelligence and creativity of large language models, drawing on information from the web to provide high-quality responses. Bard will initially be released with a lightweight version of LaMDA, the company's language model for dialogue applications. The release will be used to gather feedback and improve quality and speed.
AI-powered features in Google Search, which will distill complex information and multiple perspectives, will soon be rolled out. Google also plans to onboard individual developers and enterprises to build on top of its best AI models.
last updated on 02/06/2023
Authored by: Makena Neal & ChatGPT
Posted on: GenAI & Education

Artificial Intelligence & Scholarship
The following is a collection of publicly sourced articles related to the recent rise of artificial intelligence in everyday applications.
Note: following each [hyperlinked] title/subtitle are abstracts generated by ChatGPT after receiving prompts from M. Neal. "Write an abstract for [article link]" is indicated with [URL] abstract. "Write an abstract for [article text]" is indicated with [text] abstract. These very similar prompt-responses are shared consecutively in the hope that reading the abstracts will showcase the variations in ChatGPT output (despite similar input).
Tools such as ChatGPT threaten transparent science; here are our ground rules for their use
As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.
[URL] abstract: The recent release of the AI chatbot ChatGPT by OpenAI has raised concerns among the scientific community regarding the ethical use of Large Language Models (LLMs) in research. The chatbot's ability to produce fluent language has made it easier for students and researchers to pass off LLM-generated text as their own. As a result, publishers and editors have taken a stance on the matter, setting principles for ethical usage of LLMs in research. Nature, along with all Springer Nature journals, has outlined two principles: 1) LLMs will not be accepted as credited authors on research papers as they cannot take responsibility for the work, and 2) researchers using LLMs must document their use in the methods or acknowledgements sections of their work. Currently, editors and publishers can detect LLM-generated output but advancements in AI may make it more challenging. The need for transparency and integrity in research remains a key concern, and the scientific community must ensure that methods and authorship are kept transparent and trustworthy.
[text] abstract: Artificial Intelligence (AI) has gained the ability to generate fluent language, making it increasingly difficult to distinguish from text written by humans. The AI chatbot ChatGPT, developed by OpenAI in San Francisco, has brought the capabilities of large language models (LLMs) to a wider audience, with millions of people using it for various purposes such as generating research papers. However, there are concerns about the potential for LLMs to produce unreliable results and for students and scientists to deceitfully pass off the text as their own. To address these ethical concerns, scientific publishers such as Nature and Springer Nature have established guidelines for the use of LLMs in research, including that no LLM tool will be credited as an author on a paper and that researchers using LLMs must document their use in the methods or acknowledgements sections. Although current methods can detect text generated by LLMs, future developments may improve the LLMs' outputs. As such, researchers should consider how the transparency and trustworthiness of the scientific process can be maintained when using AI tools.
Abstracts written by ChatGPT fool scientists
Researchers cannot always differentiate between AI-generated and original abstracts.
last updated 02/01/2023
Authored by: Makena Neal & ChatGPT
Posted on: GenAI & Education

Artificial Intelligence in Higher Education (and Learning)
The following is a collection of publicly sourced articles related to the recent rise of artificial intelligence in everyday applications.
Note: following each [hyperlinked] title/subtitle are abstracts generated by ChatGPT after receiving prompts from M. Neal. "Write an abstract for [article link]" is indicated with [URL] abstract. "Write an abstract for [article text]" is indicated with [text] abstract. These very similar prompt-responses are shared consecutively in the hope that reading the abstracts will showcase the variations in ChatGPT output (despite similar input).
Deconstructing ChatGPT on the Future of Continuing Education
The future has arrived in continuing education—however, many have yet to realize the arrival.
[URL] Abstract: "The rise of AI technology and advancements in language models, such as ChatGPT, are poised to change the face of continuing education. The technology offers the potential for personalized and interactive learning experiences, making education more accessible and convenient for students. However, there are concerns about the potential job displacement and ethical implications of relying heavily on AI-powered learning systems. The article explores the potential benefits and drawbacks of ChatGPT and other AI systems in continuing education, and the need for careful consideration of these implications as the technology continues to evolve."
[text] Abstract: The recently released ChatGPT and GPT-3.5 from OpenAI have the potential to revolutionize the field of education. The advancements in AI technology will support heutagogy, which emphasizes student-centered, self-determined learning. AI will serve as a learning assistant, providing personalized and adaptive paths for learners. The technology promises universal access to education and a brighter future for lifelong learning. OpenAI has also taken steps to protect the system from providing harmful content, using a combination of a special prompt and Reinforcement Learning from Human Feedback (RLHF). Early user responses have been positive, with some stating that ChatGPT could potentially replace human-written essays.
ChatGPT and the rise of AI writers: how should higher education respond?
The proliferation of AI text generators such as ChatGPT has major implications for higher education. Nancy Gleason explores how educators should respond to these tools, which can write essays in seconds.
[URL] Abstract: "The rise of AI language models like ChatGPT is having a significant impact on writing and journalism, and higher education must respond. As AI technology continues to improve and become more prevalent, it is likely to impact the skills and knowledge required of future writers and journalists. Higher education institutions must consider how to adapt and prepare students for a future where AI-generated writing is increasingly common. The article argues that higher education institutions must play a role in preparing students for the ethical and practical implications of working with AI systems and in shaping the future of writing and journalism in an age of AI."
[text] Abstract: The article discusses the latest developments in deep machine learning, specifically the highly sophisticated AI chatbot ChatGPT developed by OpenAI. The author notes that the new interface has attracted over a million users in just five days and that ChatGPT is free and can generate text in various structured languages. The author also acknowledges potential concerns, such as the accuracy and bias of AI-generated text, but argues that these are dynamic and improving. The author argues that higher education institutions need to embrace the use of AI-generated text, but also need to reassess their pedagogical and assessment practices to incorporate AI-generated text in a way that aligns with the goals of equity, inclusion, and access. The author suggests that assessment should focus on process rather than outcome and gives a sample class activity that encourages students to compare assigned readings with AI-generated content.
ChatGPT and Its Effects on Higher Education (full article written by ChatGPT)
[URL] Abstract: The use of ChatGPT in higher education has both benefits and drawbacks. ChatGPT is a natural language processing model that can generate human-like text responses to input text, and is trained on large amounts of text data to learn the patterns and structures of human language. This makes it useful for enhancing language understanding, improving writing skills, and increasing accessibility for students with disabilities. However, NLP models can also reproduce and amplify biases and discrimination in the data they are trained on, have limited flexibility, and create a dependence on technology. Cheating on exams with NLP models is also possible and difficult to prevent, but strict monitoring and oversight can help mitigate this risk.
How ChatGPT Could Transform Higher Education
[URL] abstract: This article discusses the potential impact of OpenAI's language model, ChatGPT, on higher education. It highlights the ability of ChatGPT to provide personalized and automated answers to students' questions, allowing for more efficient and accessible education. The article also touches on the potential ethical considerations and the need for proper implementation and regulation in order to maximize the benefits and minimize potential negative consequences.
[text] abstract: ChatGPT is a large language model developed by OpenAI that excels in various text-based tasks, such as writing and composing. In the field of higher education, ChatGPT has the potential to revolutionize the way students and educators approach learning by providing personalized real-time responses to questions and needs. However, there are also significant dangers associated with the use of ChatGPT in higher education, including the potential for the model to reproduce biases and stereotypes and to reduce human interaction and personalized feedback in the learning process.
Freaking Out About ChatGPT—Part I
Artificial intelligence can crank out passable student essays in seconds. What are we going to do?
[text] abstract: The OpenAI ChatGPT interface has raised concerns in the academic community due to its ability to produce high-quality writing in seconds. The author, who has been monitoring the technology, finds the progress of GPT3 remarkable but also sees it as an opportunity to re-examine teaching practices and align them with pedagogical values. ChatGPT, however, has no understanding of content and cannot synthesize or evaluate information accurately. The author fed ChatGPT sample questions from past AP exams and was surprised by the quality of the responses. The author argues that the prevalence of standardized assessments and templates like the five-paragraph essay has corrupted the learning process and affected student writing abilities and attitudes towards writing and school. To address this, the author suggests providing students with learning experiences of intrinsic interest, utilizing methods of assessment that consider the learning process, and requiring students to demonstrate understanding in different ways.
[URL] abstract: The article "Freaking Out About ChatGPT- Part I" discusses the potential ethical and societal implications of OpenAI's language model, ChatGPT. The author highlights the concerns of chatbots being used for malicious purposes and the consequences of having AI models with advanced language capabilities. The article suggests that as AI technology continues to advance, it is crucial to consider the ethical implications and take steps to mitigate potential negative consequences.
Using Artificial Intelligence in the Classroom
[URL] abstract: This guide provides an overview of using artificial intelligence (AI) in the classroom. It discusses the benefits and challenges of incorporating AI technologies, such as chatbots and machine learning algorithms, into education. It also explores various examples of AI applications in education, such as personalized learning and language tutoring. The guide also provides recommendations for educators and institutions looking to integrate AI into their teaching practices.
[text] Abstract: ChatGPT, a new conversational artificial intelligence (AI) program, has sparked discussions on its potential impact on higher education and the workforce. While some fear its threat, instructors can use it as an opportunity to educate students on AI by incorporating it in their teaching methods. This approach can encourage student participation in a larger societal discussion, help them develop ethical grounding and understanding of academic integrity related to AI. To prepare, instructors should reflect on the potential threats and benefits, test AI themselves, revise assignments, consider data and privacy policies, and identify and communicate opportunities in their course.
4 Ways Universities Can Use AI to Streamline Operations
Deconstructing ChatGPT on the Future of Continuing Education
The future has arrived in continuing education—however, many have yet to realize the arrival.
[URL] Abstract: "The rise of AI technology and advancements in language models, such as ChatGPT, are poised to change the face of continuing education. The technology offers the potential for personalized and interactive learning experiences, making education more accessible and convenient for students. However, there are concerns about the potential job displacement and ethical implications of relying heavily on AI-powered learning systems. The article explores the potential benefits and drawbacks of ChatGPT and other AI systems in continuing education, and the need for careful consideration of these implications as the technology continues to evolve."
[text] Abstract: The recently released ChatGPT and GPT-3.5 from OpenAI have the potential to revolutionize the field of education. The advancements in AI technology will support heutagogy, which emphasizes student-centered, self-determined learning. AI will serve as a learning assistant, providing personalized and adaptive paths for learners. The technology promises universal access to education and a brighter future for lifelong learning. OpenAI has also taken steps to protect the system from providing harmful content, using a combination of a special prompt and Reinforcement Learning from Human Feedback (RLHF). Early user responses have been positive, with some stating that ChatGPT could potentially replace human-written essays.
ChatGPT and the rise of AI writers: how should higher education respond?
The proliferation of AI text generators such as ChatGPT has major implications for higher education. Nancy Gleason explores how educators should respond to these tools which can write essays in seconds
[URL] Abstract: "The rise of AI language models like ChatGPT is having a significant impact on writing and journalism, and higher education must respond. As AI technology continues to improve and become more prevalent, it is likely to impact the skills and knowledge required of future writers and journalists. Higher education institutions must consider how to adapt and prepare students for a future where AI-generated writing is increasingly common. The article argues that higher education institutions must play a role in preparing students for the ethical and practical implications of working with AI systems and in shaping the future of writing and journalism in an age of AI."
[text] Abstract: The article discusses the latest developments in deep machine learning, specifically the highly sophisticated AI chatbot ChatGPT developed by OpenAI. The author notes that the new interface has attracted over a million users in just five days and that ChatGPT is free and can generate text in various structured languages. The author also acknowledges potential concerns, such as the accuracy and bias of AI-generated text, but argues that these are dynamic and improving. The author argues that higher education institutions need to embrace the use of AI-generated text, but also need to reassess their pedagogical and assessment practices to incorporate AI-generated text in a way that aligns with the goals of equity, inclusion, and access. The author suggests that assessment should focus on process rather than outcome and gives a sample class activity that encourages students to compare assigned readings with AI-generated content.
ChatGPT and Its Effects on Higher Education (Full article written by ChatGPT)
[URL] Abstract: The use of ChatGPT in higher education has both benefits and drawbacks. ChatGPT is a natural language processing model that can generate human-like text responses to input text, and is trained on large amounts of text data to learn the patterns and structures of human language. This makes it useful for enhancing language understanding, improving writing skills, and increasing accessibility for students with disabilities. However, NLP models can also reproduce and amplify biases and discrimination in the data they are trained on, have limited flexibility, and create a dependence on technology. Cheating on exams with NLP models is also possible and difficult to prevent, but strict monitoring and oversight can help mitigate this risk.
How ChatGPT Could Transform Higher Education
[URL] abstract: This article discusses the potential impact of OpenAI's language model, ChatGPT, on higher education. It highlights the ability of ChatGPT to provide personalized and automated answers to students' questions, allowing for more efficient and accessible education. The article also touches on the potential ethical considerations and the need for proper implementation and regulation in order to maximize the benefits and minimize potential negative consequences.
[text] abstract: ChatGPT is a large language model developed by OpenAI that excels in various text-based tasks, such as writing and composing. In the field of higher education, ChatGPT has the potential to revolutionize the way students and educators approach learning by providing personalized real-time responses to questions and needs. However, there are also significant dangers associated with the use of ChatGPT in higher education, including the potential for the model to reproduce biases and stereotypes and to reduce human interaction and personalized feedback in the learning process.
Freaking Out About ChatGPT—Part I
Artificial intelligence can crank out passable student essays in seconds. What are we going to do?
[text] abstract: The OpenAI ChatGPT interface has raised concerns in the academic community due to its ability to produce high-quality writing in seconds. The author, who has been monitoring the technology, finds the progress of GPT3 remarkable but also sees it as an opportunity to re-examine teaching practices and align them with pedagogical values. ChatGPT, however, has no understanding of content and cannot synthesize or evaluate information accurately. The author fed ChatGPT sample questions from past AP exams and was surprised by the quality of the responses. The author argues that the prevalence of standardized assessments and templates like the five-paragraph essay has corrupted the learning process and affected student writing abilities and attitudes towards writing and school. To address this, the author suggests providing students with learning experiences of intrinsic interest, utilizing methods of assessment that consider the learning process, and requiring students to demonstrate understanding in different ways.
[URL] abstract: The article "Freaking Out About ChatGPT- Part I" discusses the potential ethical and societal implications of OpenAI's language model, ChatGPT. The author highlights the concerns of chatbots being used for malicious purposes and the consequences of having AI models with advanced language capabilities. The article suggests that as AI technology continues to advance, it is crucial to consider the ethical implications and take steps to mitigate potential negative consequences.
Using Artificial Intelligence in the Classroom
[URL] abstract: This guide provides an overview of using artificial intelligence (AI) in the classroom. It discusses the benefits and challenges of incorporating AI technologies, such as chatbots and machine learning algorithms, into education. It also explores various examples of AI applications in education, such as personalized learning and language tutoring. The guide also provides recommendations for educators and institutions looking to integrate AI into their teaching practices.
[text] Abstract: ChatGPT, a new conversational artificial intelligence (AI) program, has sparked discussions on its potential impact on higher education and the workforce. While some fear its threat, instructors can use it as an opportunity to educate students on AI by incorporating it in their teaching methods. This approach can encourage student participation in a larger societal discussion, help them develop ethical grounding and understanding of academic integrity related to AI. To prepare, instructors should reflect on the potential threats and benefits, test AI themselves, revise assignments, consider data and privacy policies, and identify and communicate opportunities in their course.
4 Ways Universities Can Use AI to Streamline Operations
[URL] Abstract: Artificial intelligence (AI) is increasingly being adopted by universities to streamline their operations and improve the student experience. AI can help universities save time and resources by automating repetitive tasks, such as scheduling, student services, and administrative functions. There are several ways that universities can use AI to streamline operations and improve efficiency.
Personalized Student Services: AI can be used to provide students with personalized support and information, including admission and enrollment guidance, financial aid, and course scheduling.
Improved Admissions Process: AI can automate the time-consuming and repetitive task of reviewing and sorting through large numbers of applications, enabling universities to process applications faster and more efficiently.
Streamlined Administrative Functions: AI can automate routine tasks, such as student registration, grade reporting, and course scheduling, freeing up staff time and resources for more important tasks.
Enhance Student Learning: AI can be integrated into the curriculum, providing students with customized and interactive learning experiences that can improve their comprehension and retention of course material.
In conclusion, AI has the potential to revolutionize the way universities operate, improving efficiency and the student experience. Universities can use AI to streamline operations, save time and resources, and improve the overall educational experience.
[text] Abstract: This article explains the benefits of using AI chatbots in higher education institutions. AI chatbots can help reduce call and email volume, streamline the application process, increase availability and accessibility for students, and improve operational efficiency for staff. For example, chatbots can answer routine questions from students 24/7 and remind students about unfinished applications. This can increase the likelihood of students enrolling, leading to better margins for the institution. By improving operational efficiency, staff can focus on more complex interactions with students. Chatbots have already been successful in reducing costs for universities and colleges.
ChatGPT: A Must-See Before the Semester Begins
[URL] Abstract: The article "ChatGPT: A Must-See Before the Semester Begins" focuses on the application of advanced language model technology in the classroom. It highlights the potential benefits of using OpenAI's ChatGPT for educational purposes, including improved student engagement and increased efficiency in answering frequently asked questions. The article also provides tips for integrating ChatGPT into teaching practices, such as using it for language translation and enhancing discussion forums. Overall, the article emphasizes the importance of utilizing cutting-edge technology in education and presents ChatGPT as a valuable tool for educators.
[text] Abstract: This article discusses the potential impact of AI in education, specifically in the realm of writing. The author highlights the capabilities of a language model called ChatGPT, which is capable of generating various forms of writing including movie scripts, sonnets, grant proposals, and more. The author argues that the advent of AI writing technology could potentially lead to a shift in the way education is approached, moving away from traditional rote learning and towards a model that emphasizes student-driven learning and collaboration with AI. The author also raises questions about the ethical implications of AI writing, and encourages educators to consider the possibilities for creating more meaningful and purposeful learning experiences for students.
How Emotion AI will change the online learning landscape
[URL] Abstract: Emotion AI is a rapidly developing field that uses artificial intelligence to recognize and respond to human emotions. This technology has the potential to revolutionize the online learning landscape by creating a more personalized and engaging learning experience. By using Emotion AI, online learning platforms can track and respond to the emotional state of students, making the learning process more adaptive and effective. This article explores the benefits of Emotion AI in online learning and how it will shape the future of education.
[text] Abstract: Emotion AI, a branch of affective computing, is gaining mainstream attention for its ability to recognize and respond to human emotions. With the growth of technology, understanding both the cognitive and affective channels of human connection is crucial. As such, industries such as automotive, retail, and education are investing in making their technology more human-like using computer vision and speech recognition. In the education sector, Emotion AI is expected to transform online learning by providing a more personalized and engaging experience for learners. By monitoring the emotions of students and providing feedback to instructors, Emotion AI can improve the learning process and tailor the curriculum to each student's needs. The technology has the potential to play an increasingly important role in the online learning landscape and is likely to transform the way we learn in the future.
ChatGPT Advice Academics Can Use Now
To harness the potential and avert the risks of OpenAI’s new chat bot, academics should think a few years out, invite students into the conversation and—most of all—experiment, not panic.
[URL] Abstract: This article reports on advice offered by academic experts on how to use AI language models like ChatGPT in academia. The experts discuss the benefits and limitations of using AI language models, and offer suggestions on how to effectively integrate them into research and teaching. They also emphasize the importance of being aware of ethical considerations, such as ensuring the appropriate use of data and avoiding the spread of misinformation. The article provides valuable insights for academic institutions looking to utilize AI language models in their work.
[text] Abstract: With the advent of ChatGPT, a language model released by OpenAI, the academic community is grappling with the potential and risks posed by this game-changing technology. The bot writes essays, poems, and debates on a wide range of topics and has even earned passing scores on the bar exam. While some see opportunities for accelerating discussions about teaching and learning, others worry about widespread cheating. Experts in higher education suggest a deliberate and flexible approach to ChatGPT. Faculty members should familiarize themselves with AI writing aids and consider their implications for different courses. Administrators should support faculty training and provide clear instructions to students on the use of AI aids. The focus should be on how these tools can achieve learning outcomes and promote equity and access. By thinking a few years ahead and asking the right questions, academic integrity concerns can be minimized while promoting learning outcomes.
Thoughts about the impact of AI text on assessment
[URL] Abstract: The article "Thoughts About the Impact of AI on Text Assessment" explores the potential consequences of the widespread adoption of artificial intelligence in the field of text assessment. The author discusses the various advantages and disadvantages of relying on AI algorithms, including the ability to analyze large amounts of data quickly, the potential for unbiased scoring, and the possibility of oversimplifying complex linguistic phenomena. They also examine the ethical considerations involved in using AI for text assessment, such as accountability, privacy, and the potential for AI to perpetuate existing biases. Overall, the article provides a comprehensive overview of the impact of AI on text assessment and encourages readers to consider both the benefits and challenges of this technology.
[text] Abstract: The use of Large Language Models (LLMs) like ChatGPT has raised concerns in the education community regarding their impact on assessment. The author highlights that AI-generated text is not a new problem, as similar issues exist with the use of search engines and essay-writing services. The truthfulness and trustworthiness of AI-generated text are not exclusive to AI, as people can also write untruthful information. The author suggests that the solution to assessment is not to rely on large texts or quizzes, but to focus on long-term engagement for building reputation. The education system needs to be restructured to keep small class sizes and emphasize 1:1 engagement for quality learning. The author also points out that AIs will play a significant role in our lives and need to be educated to be friendly and not competition.
ADDITIONAL RESOURCES
How to Productively Address AI-Generated Text in Your Classroom
What are ChatGPT and AI-generated text?
How can you most productively address ChatGPT and other AI text tools?
Why might students use AI-generated text?
How can you identify AI-generated text?
How can you address AI-generated content in your syllabus and course design?
How can you adjust assignments to make them more AI-resistant?
How can you embrace the AI tools for improving student learning?
ChatGPT and AI in Higher Education: Opportunities and Challenges
Presenters: Vivek Astvansh, Paul Cook, Jeanette Heidewald, Rebecca Itow, Kevin Jones
Moderator: Adam Maksl
Recorded Wednesday, January 18, 2023 3 - 4 p.m. EST
A panel of faculty and experts across IU campuses explores the potential uses and challenges of ChatGPT and other AI technologies in teaching and learning. We've seen many media reports in recent weeks about the technology. Discussions have included not only concerns about students using the tool to cheat but also opportunities to use such tech in automating administrative tasks or proactively introducing students to technology likely to shape the future of work.
In this webinar, we provided a brief overview of the technology and explored these topics and more, identifying both challenges and opportunities for teaching and learning. Our panelists provided diverse, cross-disciplinary perspectives. This panel was the first in a series of programs offered across IU campuses on AI and its role in teaching and learning.
Authored by: Makena Neal & ChatGPT
Posted on: GenAI & Education

Artificial Intelligence in ELA & Writing
The following are a collection of publicly sourced articles related to the recent rise of artificial intelligence in everyday applications.
note: following each [hyperlinked] title/subtitle are abstracts generated by ChatGPT after receiving prompts from M. Neal. "Write an abstract for [article link]" is indicated with [URL] abstract. "Write an abstract for [article text]" is indicated with [text] abstract. These very similar prompt-responses are shared consecutively in the hope that reading the abstracts will showcase the variations in ChatGPT output (despite similar input).
Robots Are Writing Poetry, and Many People Can’t Tell the Difference
Machines are putting out astonishingly human writing. What does that mean for the future of art?
[URL] abstract: "AI Poetry" is an article published by The Walrus which explores the use of artificial intelligence in the creation of poetry. The piece delves into the potential of AI to generate poems that match human-like creativity, emotions and imagination. It also highlights the limitations of AI-generated poetry and raises questions about the future of poetry in an increasingly digital world. The article provides a comprehensive analysis of AI's impact on the art of poetry and offers insights into the role that human poets will play in the coming years.
Critical AI: Adapting College Writing for the Age of Large Language Models Such as ChatGPT: Some Next Steps for Educators
Large language models (LLMs) such as ChatGPT are sophisticated statistical models that predict probable word sequences in response to a prompt even though they do not “understand” language in any human-like sense. Through intensive mining, modeling, and memorization of vast stores of language data “scraped” from the internet, these text generators deliver a few paragraphs at a time which resemble writing authored by humans. This synthetic text is not directly “plagiarized” from some original, and it is usually grammatically and syntactically well-crafted.
From an academic integrity perspective, this means that “AI”-generated writing
1) is not easily identifiable as such to the unpracticed eye;
2) does not conform to “plagiarism” as that term is typically understood by teachers and students; and
3) encourages students to think of writing as task-specific labor disconnected from learning and the application of critical thinking.
last updated on 02/01/2023
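The Critical AI piece above describes LLMs as statistical models that "predict probable word sequences." As a toy, hedged illustration of that idea only (real models such as ChatGPT use neural networks over subword tokens, not word counts), here is a minimal bigram sampler in Python:

```python
import random
from collections import defaultdict

# A bigram model counts which word follows which in a tiny corpus,
# then samples a plausible continuation. This is only a sketch of the
# "predict the next probable word" objective, not how ChatGPT works.
corpus = ("the model predicts the next word the model samples "
          "a probable word sequence").split()

# Map each word to the list of words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Extend `start` by sampling observed next words, one at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # reached a word with no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Even at this scale, the output is fluent locally but carries no "understanding," which is the point the excerpt makes about synthetic text resembling human writing without being anchored to meaning.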
Authored by: Makena Neal & ChatGPT
Posted on: GenAI & Education

Identifying Artificial Intelligence & other considerations for addressing
The following are a collection of publicly sourced articles related to the recent rise of artificial intelligence in everyday applications.
note: following each [hyperlinked] title/subtitle are abstracts generated by ChatGPT after receiving prompts from M. Neal. "Write an abstract for [article link]" is indicated with [URL] abstract. "Write an abstract for [article text]" is indicated with [text] abstract. These very similar prompt-responses are shared consecutively in the hope that reading the abstracts will showcase the variations in ChatGPT output (despite similar input).
How to Productively Address AI-Generated Text in Your Classroom
[URL] abstract: The article on "AI Generated Text" from Indiana University's Center for Innovative Teaching and Learning (CITL) discusses the increasing use of artificial intelligence (AI) technology in generating text and its potential impact on academic integrity. It highlights the importance of understanding the limitations and biases present in AI generated text and the responsibility of educators and students to critically evaluate and verify the authenticity of such text. The article provides practical tips and resources for identifying AI generated text and emphasizes the need for maintaining ethical standards in the use of AI in academic settings.
[text] abstract: “text too long”
How to Identify chatGPT stories?
[text] abstract: An editor shares insights on how to detect AI-generated text using GPT and offers alternatives such as observing telltale signs of machine-generated text, reading the story carefully, and comparing it to other sources. The editor notes that detectors are often wrong, and suggests using common sense and considering the value the text provides to the reader. The article concludes that the best way to identify texts produced by GPT is through layout and writing style, but the ultimate goal is to satisfy informational, entertainment, and emotional needs.
[URL] abstract: This article provides tips on how to identify AI-generated text produced by OpenAI's ChatGPT model. The article suggests looking for repetitive phrases, lack of detail, and absence of realistic human emotion as signs of machine-generated text. It also suggests using a plagiarism checker and comparing the story with other sources to check for inconsistencies and factual errors. The author acknowledges that detectors may not be entirely accurate and advises using common sense and considering the value the text provides to the reader. The article concludes by emphasizing that what matters is how well the text satisfies our informational, entertainment, and emotional needs, rather than who or what wrote the text.
Additional Resources for Identifying AI
The CheckGPT tool developed by Packback appears to be useful in guiding instructors on their responses to potential usage of AI-generated materials. More information about the tool can be found at https://labs.packback.co/originality.
last updated on 03/02/2023
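As a toy illustration of one "telltale sign" these pieces mention (repetitive phrasing), here is a minimal sketch of a repetition heuristic. It is illustrative only; real AI-text detectors are far more sophisticated and, as the articles themselves note, often wrong. The function name and sample string are invented for the example:

```python
# Toy heuristic sketch: flag text with heavily repeated 3-word phrases.
# Illustrative only -- not a reliable AI-text detector.
from collections import Counter

def repeated_trigram_ratio(text):
    """Fraction of 3-word phrases that occur more than once."""
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

sample = "the article provides tips " * 3  # invented, highly repetitive sample
ratio = repeated_trigram_ratio(sample)     # 1.0: every trigram repeats
```

A high ratio might prompt a closer look, but as the editors quoted above stress, no single signal is conclusive and human judgment remains essential.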
Authored by: Makena Neal & ChatGPT
Posted on: The MSU Graduate Le...
Emotional Intelligence with Dr. Lew Dotterer
Authored by: Dr. Lew Dotterer
Navigating Context
Posted on: #iteachmsu

Students Are Using AI: So What Now?
A conundrum for college instructors
I want to spend time researching and sharing how AI can be a teacher, but in truth, AI is much better at being a student.
It makes sense, then, that the first response most universities had to the release of ChatGPT was immediate guidance about academic integrity, plagiarism, and misuse. Teaching with technology is a constant balance for instructors between authenticity and policing: I have had many consultations with instructors who must always consider how an assignment could be plagiarized, a resource misused, or a quiz completed with the help of the internet. Between Chegg, Wikipedia, and now generative AI, instructors are right to be wary of how their students use technology in their courses.
But students do not lean on technology to do their learning for them in a vacuum; structures of power are at work. A sophomore undergraduate pays over $1,600 for a three-credit class. A year at MSU for a freshman is budgeted at $35,588 (that's over half of my annual salary)! For some, failing a class is financially devastating, and a lowered GPA could put scholarship eligibility at risk. No wonder some students will use technology to get a good grade.
I know that quantifying education in terms of dollars is only one lens to evaluate student conduct. Young people get far more out of college than a degree: experiential learning, travel opportunities, and relationships with world-class instructors, things that MSU especially excels in.
So, if AI is really good at getting assignments done–at answering discussion posts and brainstorming research ideas and editing essays: can we as educators help guide students in their use? We fear that AI will make it easier to be a student, and I wonder if that is at the core of our anxiety.
There are many on campus who are thinking about this conundrum as they prep for the upcoming fall semester. They are:
Exploring how AI will be used in their fields.
Creating lessons on using AI for their class.
Measuring student AI literacy.
Discussing the impact of AI on the environment.
Revamping assignments to be AI-proof.
Adding language about AI use into their syllabi.
Attending workshops and collaborating with colleagues.
Using AI for lesson planning and course design.
By doing any of these activities, instructors are not just getting ready for AI—they’re getting a real sense of how their students might be using it too. And I think that’s critical to where we are right now in conversations about higher ed and generative AI use: students are using it, so let’s find out how. Maybe then we can address the larger structural barriers academia is facing.
Side note: I asked Claude to review this post, and it told me to add a call to action, provide concrete suggestions, and provide a vision for the future. So, here’s a call to action to sweeten the pot:
If any of this resonates with you, let’s chat! I’m happy to help brainstorm or find resources or help you learn a new tool. And speaking of resources, AI Commons launches this week. Featuring stories from all over campus, AI Commons is a space for educators describing their experiences and sharing their ideas. Check out the site or submit a story!
Freyesaur out.
Authored by: Freyesaur
Posted on: GenAI & Education

Complete Guide to Incorporating Generative AI in Your Syllabus
(Photo by Steve Johnson on Unsplash )
You can also access the Generative AI Syllabus Guide Playlist with this content broken down into the following sections. Table of Contents:
MSU Guidance and [Non]Permitted Uses
Developing and Communicating a Course-level Generative AI Use policy
Example Syllabus Statements for the Use of AI Tools in Your Course
Design For Generative AI (restrict, permit, require)
Design Around Generative AI (ban)
Example Statements from Current USA, Higher Education Educators
Developing your Scholarly and Ethical Approaches to Generative AI
Beyond Syllabi Language
Additional considerations to help you develop your generative AI philosophy (Watkins, 2022)
References
The following MSU-specifics should be used to inform your decisions...
Overall guidance: We collectively share the responsibility to uphold intellectual honesty and scholarly integrity. These are core principles that may be compromised by the misuse of GenAI tools, particularly when GenAI-generated content is presented as original, human-created work.
Permitted uses in Teaching & Learning: Instructors are expected to establish course-specific guidance that defines the appropriate and inappropriate use of GenAI tools.
Students may only use GenAI tools to support their coursework in ways explicitly permitted by the instructor.
Non-permissible uses:
Do not use GenAI to deliberately fabricate, falsify, impersonate, or mislead, unless explicitly approved for instruction or research in a controlled environment.
Do not record or process sensitive, confidential, or regulated information with non-MSU GenAI tools.
Do not enter FERPA-protected student records, PII, PHI, financial, or HR data into unapproved tools; comply with MSU’s data policy and all regulations.
Do not use export-controlled data or CUI with GenAI tools unless approved for MSU’s Regulated Research Enclave (RRE).
Developing and Communicating a Course-level Generative AI Use policy
A well-prepared course should be designed for ("restrict", "permit", or "require") or designed around ("ban") generative AI. Courses designed for AI should detail the ways and degrees to which generative AI use will be incorporated into activities and assessments; such a course may incorporate AI into some activities and not others, and AI may be explicitly excluded or included at different stages. Courses designed around AI may discuss the impacts of generative AI as a topic, but the expectation is that students will not use these tools, and the course should be intentionally designed so that using generative AI would either not be conducive to completing assessments and activities, or would prove overly cumbersome.
Regardless of your approach, communicating your expectations and rationale to learners is imperative.
Set clear expectations. Be clear in your syllabus about your policies for when, where, and how students should be using generative AI tools, and how to appropriately acknowledge (e.g., cite, reference) when they do use generative AI tools. If you are requiring students to use generative AI tools, these expectations should also be communicated in the syllabus and if students are incurring costs, these should be detailed in the course description on the Registrar’s website.
Regardless of your approach, you might include time for ethics discussions. Add time into your course to discuss the ethical implications of ChatGPT and forthcoming AI systems. Talk with students about the ethics of using generative AI tools in your course, at your university, and within your discipline or profession. Don’t be afraid to discuss the gray areas where we do not yet have clear guidance or answers; gray areas are often the places where learning becomes most engaging.
Example Syllabus Statements for the Use of AI Tools in Your Course
There is no “one size fits all” policy for AI use in higher education. Much like attendance/participation policies, GenAI course-level rules and statements will be determined by individual instructors, departments, and programs. The following resource is provided to assist you in developing coherent policies on the use of generative AI tools in your course, within MSU's guidelines. Please adjust these examples to fit your particular context. Remember, your course generative AI policies should not only be listed in your syllabus but also explicitly included in assignment descriptions where AI use is allowed or disallowed.
It is your responsibility as instructor to note and explain your individual course-level rule. A conversation with your department is highly recommended so that generative AI use in the classroom reflects broader use in the unit and discipline. If you have specific questions about writing your course rules, please reach out to the Center for Teaching and Learning Innovation.
Design For Generative AI
Restrict [This syllabus statement is useful when you are allowing the use of AI tools for certain purposes, but not for others. Adjust this statement to reflect your particular parameters of acceptable use. The following is an example.]
Example1:
The use of generative AI tools (e.g. ChatGPT, Dall-e, etc.) is permitted in this course for the following activities:
[insert your permitted course activities here*]
The use of generative AI tools is not permitted in this course for the following activities:
[insert your non-permitted course activities here*]
You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge.
Example2: Taken, with slight modification, from Temple University’s Center for the Advancement of Teaching to demonstrate the kinds of permitted/restricted activity an instructor could denote.
The use of generative AI tools (e.g. ChatGPT, Dall-e, etc.) is permitted in this course for the following activities:
Brainstorming and refining your ideas;
Fine tuning your research questions;
Finding information on your topic;
Drafting an outline to organize your thoughts; and
Checking grammar and style.
The use of generative AI tools is not permitted in this course for the following activities:
Impersonating you in classroom contexts, such as by using the tool to compose discussion board prompts assigned to you or content that you put into a Zoom chat.
Completing group work that your group has assigned to you, unless it is mutually agreed within your group and in alignment with course policy that you may utilize the tool.
Writing a draft of a writing assignment.
Writing entire sentences, paragraphs or papers to complete class assignments.
You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge. For example, [Insert citation style for your discipline. See these resources for APA guidance, and for other citation formats.]. Any assignment that is found to have used generative AI tools in unauthorized ways [insert the penalty here*]. When in doubt about permitted usage, please ask for clarification.
Use permitted [This syllabus statement is useful when you are allowing, and perhaps encouraging, broad use of generative AI tools. Adjust this statement to reflect your particular parameters of acceptable use in your course. The following is an example.]
Example:
You are welcome to use generative AI tools (e.g. ChatGPT, Dall-e, etc.) in this class as doing so aligns with the course learning goal [insert the course learning goal use of AI aligns with here*]. You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge.
Use required [This syllabus statement is useful when you have certain assignments that will require that students use generative AI tools. Adjust this statement to reflect your particular parameters of acceptable use. The following is an example.]
Example:
You will be expected to use generative AI tools (e.g. ChatGPT, Dall-e, etc.) in this class as doing so aligns with the course learning goal [insert the course learning goal use of AI aligns with]. Our class will make use of the [insert name of tool(s) here*] tool, and you can gain access to it by [insert instructions for accessing tool(s) here*]. You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge.
Design Around Generative AI
Ban [This syllabus statement is useful when you are forbidding all use of generative AI tools for any purpose in your class. Adjust this statement to reflect your particular parameters of acceptable use. The following is an example.]
The use of generative AI tools (such as ChatGPT, DALL-E, etc.) is not permitted in this class; therefore, any use of AI tools for work in this class may be considered a violation of Michigan State University’s policy on academic integrity, the Spartan Code of Honor Academic Pledge, and Student Rights and Responsibilities, since the work is not your own. The use of unauthorized AI tools will result in [insert the penalty here*].
CONCERN: The ubiquity of generative AI tools, including their integration into Google search results and MS Office products, means that an outright generative AI ban is impractical to enforce for any activity that makes use of the Internet or the MS Office Suite.
* It is highly recommended that you have conversations in your department about the appropriate penalties for unauthorized use of an AI. It is important to think about the appropriate level of penalty for first-time offenders and those who repeatedly violate your policies on the use of AI.
Example Statements from Current USA, Higher Education Educators
This collection of example statements is a compilation from a variety of sources, including a Faculty Learning Community (FLC) at Cleveland State University, Ohio University’s AI, ChatGPT and Teaching and Learning, and some of Michigan State University’s own educators! (If you have an example generative AI policy from your course that you’d be willing to share, please add it to the comments below or e-mail it to the MSU Center for Teaching and Learning Innovation at teaching@msu.edu) NOTE: making your own course-level determination of "ban", "restrict", "permit", or "require" and using the sample language is the best, first place to start!
“AI (artificial intelligence) resources such as ChatGPT can be useful in a number of ways. Because it can also be abused, however, you are required to acknowledge use of AI in any work you submit for class. Text directly copied from AI sites must be treated as any other direct quote and properly cited. Other uses of AI must be clearly described at the end of your assignment.” -Claire Hughes-Lynch
“While AI tools can be useful for completing assignments and detecting plagiarism, it is important to use them responsibly and ethically. Practice based on these guidelines as a future or current K-12 teacher. The following are some guidelines for what not to do when using AI in your assignments and for plagiarism detection:
Do not rely solely on AI tools to complete assignments. It is important to understand the material and complete assignments on your own, using AI tools as a supplement rather than a replacement for your own work.
Do not use AI tools to plagiarize*. Using AI to generate or modify content to evade plagiarism detection is unethical and violates academic integrity.
Do not assume that AI responses are always correct. It has been noted that AI can generate fake results.* Please see the plagiarism/academic integrity policy in the course syllabus.” -Selma Koc
“Intellectual honesty is vital to an academic community and for my fair evaluation of your work. All work submitted in this course must be your own, completed in accordance with the University’s academic regulations. Use of AI tools, including ChatGPT, is permitted in this course. Nevertheless, you are only encouraged to use AI tools to help brainstorm assignments or projects or to revise existing work you have written. It is solely your responsibility to make all submitted work your own, maintain academic integrity, and avoid any type of plagiarism. Be aware that the accuracy or quality of AI generated content may not meet the standards of this course, even if you only incorporate such content partially and after substantial paraphrasing, modification and/or editing. Also keep in mind that AI generated content may not provide appropriate or clear attribution to the author(s) of the original sources, while most written assignments in this course require you to find and incorporate highly relevant peer-reviewed scholarly publications following guidelines in the latest publication manual of the APA. Lastly, as your instructor, I reserve the right to use various plagiarism checking tools in evaluating your work, including those screening for AI-generated content, and impose consequences accordingly.” -Xiongyi Liu
“If you are ever unsure about whether collaboration with others, including using artificial intelligence, is allowed or not, please ask me right away. For the labs, although you may discuss them in groups (and try using AI), you must all create your own code, output and answers. Quizzes will be done in class and must be solely your own work. You alone are always responsible for the correctness of the final answers and assignments you submit.” - Emily Rauschert on AI as collaboration partner
“Chat GPT: The use of Chat GPT is neither encouraged nor prohibited from use on assignments for GAD 250. Chat GPT is quickly becoming a communication tool in most business settings. Therefore, if you choose to use Chat GPT for assignments, please be sure to revise the content for clarity, conciseness, and audience awareness. Chat GPT is simply a tool and should not be used as a way to produce first and only drafts. Every assignment submission will be graded using the rubric provided in the syllabus. Be aware that Chat GPT may not develop high-quality work that earns a passing grade. It is your responsibility to review and revise all work before submitting to the instructor.” -Leah Schell-Barber for a Business Communications Course
“Use of Generative AI, such as ChatGPT and Microsoft Bing-Chat, must maintain the highest standards of academic integrity and adhere to the OU Code of Student Conduct. The use of Generative AI should be seen as a tool to enhance academic research, not as a replacement for critical thinking and originality in assignments. Students are not permitted to submit assignments that have been fully or partially generated by AI unless explicitly stated in the assignment instructions. All work submitted must be the original work of the student. Any ideas garnered from Generative AI research must be acknowledged with proper in-text citation and reference. Students may be asked to save the AI chat as a PDF file for verification.” -Ohio University College of Business Generative AI Use for Academic Work Policy
“‘The policy of this class is that you must be the creator of all work you submit for a grade. The use of others’ work, or the use of intelligent agents, chat bots, or a.i. engines to create your work is a violation of this policy and will be addressed as per MSU and Broad College codes of conduct.’ - Jeremy Van Hof… Or, you might consider this, which I asked ChatGPT to write for me: ‘Sample Policy Language: Students should not use ChatGPT to complete course assignments or for any other academic activities. ChatGPT should be used as a supplemental resource and should not replace traditional academic activities.’ (ChatGPT per Jeremy Van Hof’s prompting)
Or this much longer version, also written by ChatGPT: ‘The following course policy statement prohibits the use of Artificial Intelligence (AI) for the completion of assignments and activities during the duration of the course. At the Broad College, we strive to create an academic environment where learning is the foremost priority. We strongly believe that learning is best achieved through the hard work and dedication of our students. As such, we prohibit the use of Artificial Intelligence (AI) for the completion of assignments and activities during the course. Our policy is in line with our commitment to providing a fair and equitable learning environment for all students. We believe that AI should not be used to substitute human effort, as it defeats the purpose of our educational goals, which are to encourage critical thinking and problem-solving. We understand that AI can be a useful tool in many contexts, and we do not discourage its use in other courses. However, in this course, we will not accept assignments or activities that have been completed through the use of AI. We expect our students to be honest and to complete their work independently. We will be monitoring student work closely to ensure compliance with this policy. Violations of this policy will be met with disciplinary sanctions. All students are expected to adhere to this policy and to abide by the standards of the University.’ (ChatGPT per Jeremy Van Hof’s prompting)” -Jeremy Van Hof, Broad College of Business
“I study AI. I research it in my role as faculty in the Experience Architecture and Professional & Public Writing majors. And I don’t think it’s inherently bad or scary, in the same way that a calculator isn’t bad/scary for math. Artificial intelligence technologies such as ChatGPT can be an excellent starting point and a place to begin inquiry. But they are not a replacement for human thinking and learning. Robots lack empathy and nuance. As such, here is my policy:
You may use AI as a tool, but you may not use AI to replace your own beautiful brain. That means that you may ask ChatGPT, for example, to give you a list of bands similar to one that you hear and appreciate in this course. You may ask ChatGPT to give you an overview of a punk scene in a geographic location at a particular time. You may ask it for the history of punk rock and punk cultures. You may ask it what happened to Sid Vicious.
But you may not ask it to write on your behalf, and you must not turn in anything that has been written by ChatGPT and pass it off as your own for any assignment in this class, including discussion responses, papers, and exams. If you do so, I will know, and that will lead to an uncomfortable moment–and to you failing the assignment.
This is not meant to be punitive. It’s meant to reinforce how much I value you and your ideas and your intellect. In a face-to-face environment, we would have a lengthy conversation about AI, ethics, and human learning. If you want to have that conversation, I’m happy to do so via Zoom–email me!” -Kate Birdsall, asynchronous US23 course on punk-rock politics
Developing your Scholarly and Ethical Approaches to Generative AI
Taken, with slight modification, from “Update Your Course Syllabus for chatGPT” by Ryan Watkins, Professor of Educational Technology Leadership, and Human-Technology Collaboration at George Washington University in Washington DC (2022), via Medium.
Beyond Syllabi Language
Communicate your perspective about AI use. In addition to syllabus statements, consider talking with your students about AI tools like ChatGPT. Regardless of your orientation to generative AI use, it is important that you clearly communicate your expectations with the introduction of each assignment/assessment.
Different levels of familiarity: As an emerging technology, students will have differing levels of familiarity with these tools. For instance, while ChatGPT can write a grammatically correct paper or appear to solve a math problem, it may be unreliable and limited in scope. Discuss with students the uses and limitations of AI tools more broadly in addition to your perspective on their use in your class.
Connect to critical thinking skills: AI tools have many implications beyond the classroom. Consider talking with students about how to be engaged-consumers of AI content (e.g., how to identify trusted sources, reading critically, privacy concerns). Discuss how you and colleagues use AI in your own work.
Adapt assessments. AI tools are emerging and it can be incredibly difficult to make any assessment completely free from AI interference. Beyond a syllabus statement, you may also consider adapting your assessments to help reduce the usefulness of AI products. However before revising any assignment, it’s helpful to reflect on what exactly you want students to get out of the experience and share your expectations with your students. Is it just the end product, or does the process of creating the product play a significant role?
Create assessments that allow students to develop ideas over time. Depending on your class size, consider scaffolding assessments to be completed in small components (e.g., proposal, annotated bibliography, outline, first draft, revised drafts).
Ask students to connect their writing to specific course materials or current events. Students can draw from the course textbook, additional readings on Moodle or Blackboard, and even class discussion boards or in-class discussions.
Incorporate personal experiences and reflections. Provide students with opportunities to connect what they are learning to their own lives and experiences—stories unique to each individual.
Incorporate Multimedia Assessments. Consider developing or adapting assessments to include multimedia submissions (e.g., audio or video components). Also, consider peer-review and social annotation tools like Eli Review or Google Docs for students to use when responding to assigned readings or other materials.
Use class time. Ask students to complete writing assignments during class time (e.g. complete reading reflections at the beginning of class, or use exit tickets). Asking students to organize their ideas by writing during class may also support student engagement in other class activities such as discussions and group work.
Get Creative With Your Assignments: Visit “Update Your Course Syllabus for chatGPT” by Ryan Watkins (Medium article) for 10 ideas for creative assignments adapted for a classroom with chatGPT. You can mitigate the risk of students using chatGPT to cheat, and at the same time improve their knowledge and skills for appropriately using new AI technologies inside and outside the classroom.
Additional considerations to help you develop your generative AI philosophy (Watkins, 2022)
Expand your options. Consider your repertoire of instructional strategies. Atsusi Hirumi offers a guide to research-grounded strategies for any classroom. These are not, however, “a la carte” menus; you must use all of the steps of any strategy to gain the evidence-based benefits.
Reflect on your values. As Tyler Cowen pointed out, there will be those who gain and those that lose with the emergence of chatGPT and other generative AI tools. This is as true for students as it is for faculty and instructors. Be ready to openly discuss the ethical implications of generative AI tools with your students, along with the value of what you are teaching and why learning these are important to their futures.
Consider time. As discussed during Bryan Alexander’s webinar, chatGPT and other generative AI tools offer a short-cut to individuals who are short on time. Examine your course schedule to determine if you are unknowingly pushing students to take short-cuts. Some instructors try to cover too much content in their courses already.
Remember, AI is not human. Be careful not to anthropomorphize chatGPT and other generative AI tools. ChatGPT is a language model, and if we anthropomorphize these technologies, then it will be much harder to understand their promise and perils. Murray Shanahan suggests that we avoid statements such as, “chatGPT knows…”, or “ChatGPT thinks…”; instead, use “According to chatGPT…” or “ChatGPT’s output…”.
Again, AI is likely to be a part of your students’ life to some extent this semester, so plan accordingly. Critically considering your course design in the context of generative AI is an important educator practice. Following the Provost’s call, MSU instructors are encouraged to 1) develop a course-level generative AI use policy and actively discuss with students about expectations for generative AI use in the work for your class, 2) promote equitable and inclusive use of the technology, and 3) work with colleagues across campus to determine ethical and scholarly applications of generative AI for preparing students to succeed in an evolving digital landscape. MSU does not currently have a university-wide policy on AI in the classroom, so it is your responsibility as instructor to note and explain your individual course policy. A conversation with your department is highly recommended so that generative AI use in the classroom reflects that in the discipline.
References
This resource is collated from multiple sites, publications, and authors with some modification for MSU context and links to MSU specific resources. Educators should always defer to University policy and guidelines.
MSU Office of Student Support & Accountability Faculty Resources, including Academic Dishonesty Report form.
Watkins, R. (2022) Update Your Course Syllabus for chatGPT. Educational Technology Leadership, The George Washington University via Medium: https://medium.com/@rwatkins_7167/updating-your-course-syllabus-for-chatgpt-965f4b57b003
Center for the Advancement of Teaching (2023). Sample Syllabus Statements for the Use of AI Tools in Your Course. Temple University
Center for Teaching & Learning (2023) How Do I Consider the Impact of AI Tools like ChatGPT in My Courses?. University of Massachusetts Amherst. https://www.umass.edu/ctl/how-do-i-consider-impact-ai-tools-chatgpt-my-courses
Center for Teaching, Learning and Assessment (2023). AI, ChatGPT and Teaching and Learning. Ohio University. https://www.ohio.edu/center-teaching-learning/instructor-resources/chat-gpt
Office of Teaching, Learning, and Technology. (2023). Artificial Intelligence Tools and Teaching. Iowa University. https://teach.its.uiowa.edu/artificial-intelligence-tools-and-teaching
Center for New Designs in Learning and Scholarship (2023). Chat GPT and Artificial Intelligence Tools. Georgetown University. https://cndls.georgetown.edu/ai-composition-tools/#privacy-and-data-collection
Office for Faculty Excellence (2023). Practical Responses to ChatGPT. Montclair State University. https://www.montclair.edu/faculty-excellence/practical-responses-to-chat-gpt/
Teaching and Learning at Cleveland State University by Center for Faculty Excellence is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
You can also access the Generative AI Syllabus Guide Playlist with this content broken down into the following sections. Table of Contents:
MSU Guidance and [Non]Permitted Uses
Developing and Communicating a Course-level Generative AI Use policy
Example Syllabus Statements for the Use of AI Tools in Your Course
Design For Generative AI (restrict, permit, require)
Design Around Generative AI (ban)
Example Statements from Current USA, Higher Education Educators
Developing your Scholarly and Ethical Approaches to Generative AI
Beyond Syllabi Language
Additional considerations to help you develop your generative AI philosophy (Watkins, 2022)
References
The following MSU-specifics should be used to inform your decisions...
Overall guidance: We collectively share the responsibility to uphold intellectual honesty and scholarly integrity. These are core principles that may be compromised by the misuse of GenAI tools, particularly when GenAI-generated content is presented as original, human-created work.
Permitted uses in Teaching & Learning: Instructors are expected to establish course-specific guidance that defines the appropriate and inappropriate use of GenAI tools.
Students may only use GenAI tools to support their coursework in ways explicitly permitted by the instructor.
Non-permissible uses:
Do not use GenAI to deliberately fabricate, falsify, impersonate, or mislead, unless explicitly approved for instruction or research in a controlled environment.
Do not record or process sensitive, confidential, or regulated information with non-MSU GenAI tools.
Do not enter FERPA-protected student records, PII, PHI, financial, or HR data into unapproved tools; comply with MSU’s data policy and all regulations.
Do not use export-controlled data or CUI with GenAI tools unless approved for MSU’s Regulated Research Enclave (RRE).
Developing and Communicating a Course-level Generative AI Use policy
A well-prepared course should be designed for ("restrict", "permit" or "require") or designed around ("ban") generative AI. Courses designed for AI should detail the ways and degrees to which generative AI use will be incorporated into activities and assessments; such courses may incorporate AI for some activities and not others, and, depending on the course, AI may be explicitly excluded or included at different stages. Courses designed around AI may discuss the impacts of generative AI as a topic, but the expectation is that students will not use these tools, and the course should be intentionally designed so that using generative AI would either not be conducive to the completion of assessments and activities, or so that the attempt to do so would prove overly cumbersome.
Regardless of your approach, communicating your expectations and rationale to learners is imperative.
Set clear expectations. Be clear in your syllabus about your policies for when, where, and how students should be using generative AI tools, and how to appropriately acknowledge (e.g., cite, reference) when they do use generative AI tools. If you are requiring students to use generative AI tools, these expectations should also be communicated in the syllabus, and if students will incur costs, these should be detailed in the course description on the Registrar’s website.
Regardless of your approach, you might include time for ethics discussions. Add time into your course to discuss the ethical implications of chatGPT and forthcoming AI systems. Talk with students about the ethics of using generative AI tools in your course, at your university, and within your discipline or profession. Don’t be afraid to discuss the gray areas where we do not yet have clear guidance or answers; gray areas are often the places where learning becomes most engaging.
Example Syllabus Statements for the Use of AI Tools in Your Course
There is no “one size fits all” policy for AI use in higher education. Much like attendance/participation policies, GenAI course-level rules and statements will be determined by individual instructors, departments, and programs. The following resource is provided to assist you in developing coherent policies on the use of generative AI tools in your course, within MSU's guidelines. Please adjust these examples to fit your particular context. Remember, communication of your course generative AI policies should not only appear in your syllabus, but also be explicitly included in assignment descriptions where AI use is allowed or disallowed.
It is your responsibility as instructor to note and explain your individual course-level rule. A conversation with your department is highly recommended so that generative AI use in the classroom reflects broader use in the unit and discipline. If you have specific questions about writing your course rules, please reach out to the Center for Teaching and Learning Innovation.
Design For Generative AI
Restrict [This syllabus statement is useful when you are allowing the use of AI tools for certain purposes, but not for others. Adjust this statement to reflect your particular parameters of acceptable use. The following is an example.]
Example 1:
The use of generative AI tools (e.g. ChatGPT, Dall-e, etc.) is permitted in this course for the following activities:
[insert your permitted course activities here*]
The use of generative AI tools is not permitted in this course for the following activities:
[insert your non-permitted course activities here*]
You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge.
Example 2: Taken, with slight modification, from Temple University’s Center for the Advancement of Teaching to demonstrate the kinds of permitted/restricted activity an instructor could denote.
The use of generative AI tools (e.g. ChatGPT, Dall-e, etc.) is permitted in this course for the following activities:
Brainstorming and refining your ideas;
Fine tuning your research questions;
Finding information on your topic;
Drafting an outline to organize your thoughts; and
Checking grammar and style.
The use of generative AI tools is not permitted in this course for the following activities:
Impersonating you in classroom contexts, such as by using the tool to compose discussion board prompts assigned to you or content that you put into a Zoom chat.
Completing group work that your group has assigned to you, unless it is mutually agreed within your group and in alignment with course policy that you may utilize the tool.
Writing a draft of a writing assignment.
Writing entire sentences, paragraphs or papers to complete class assignments.
You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge. For example, [insert citation style for your discipline; see these resources for APA guidance and for other citation formats]. Any assignment found to have used generative AI tools in unauthorized ways will result in [insert the penalty here*]. When in doubt about permitted usage, please ask for clarification.
Use permitted [This syllabus statement is useful when you are allowing, and perhaps encouraging, broad use of generative AI tools. Adjust this statement to reflect your particular parameters of acceptable use in your course. The following is an example.]
Example:
You are welcome to use generative AI tools (e.g. ChatGPT, Dall-e, etc.) in this class as doing so aligns with the course learning goal [insert the course learning goal use of AI aligns with here*]. You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge.
Use required [This syllabus statement is useful when you have certain assignments that will require that students use generative AI tools. Adjust this statement to reflect your particular parameters of acceptable use. The following is an example.]
Example:
You will be expected to use generative AI tools (e.g. ChatGPT, Dall-e, etc.) in this class as doing so aligns with the course learning goal [insert the course learning goal use of AI aligns with]. Our class will make use of the [insert name of tool(s) here*] tool, and you can gain access to it by [insert instructions for accessing tool(s) here*]. You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited in order to stay within university policies on academic integrity and the Spartan Code of Honor Academic Pledge.
Design Around Generative AI
Ban [This syllabus statement is useful when you are forbidding all use of generative AI tools for any purpose in your class. Adjust this statement to reflect your particular parameters of acceptable use. The following is an example.]
The use of generative AI tools (such as ChatGPT, DALL-E, etc.) is not permitted in this class; therefore, any use of AI tools for work in this class may be considered a violation of Michigan State University’s policy on academic integrity, the Spartan Code of Honor Academic Pledge, and Student Rights and Responsibilities, since the work is not your own. The use of unauthorized AI tools will result in [insert the penalty here*].
CONCERN: The ubiquity of generative AI tools, including their integration into Google search results and MS Office products, makes an outright generative AI ban impractical to enforce for any activity that makes use of the Internet or the MS Office suite.
* It is highly recommended that you have conversations in your department about the appropriate penalties for unauthorized use of an AI. It is important to think about the appropriate level of penalty for first-time offenders and those who repeatedly violate your policies on the use of AI.
Example Statements from Current USA, Higher Education Educators
This collection of example statements is a compilation from a variety of sources, including the Faculty Learning Community (FLC) at Cleveland State University, Ohio University’s AI, ChatGPT and Teaching and Learning, and some of Michigan State University’s own educators! (If you have an example generative AI policy from your course that you’d be willing to share, please add it to the comments below or e-mail it to the MSU Center for Teaching and Learning Innovation at teaching@msu.edu.) NOTE: making your own course-level determination of "ban", "restrict", "permit", or "require" and using the sample language is the best first place to start!
“AI (artificial intelligence) resources such as ChatGPT can be useful in a number of ways. Because it can also be abused, however, you are required to acknowledge use of AI in any work you submit for class. Text directly copied from AI sites must be treated as any other direct quote and properly cited. Other uses of AI must be clearly described at the end of your assignment.” -Claire Hughes-Lynch
“While AI tools can be useful for completing assignments and detecting plagiarism, it is important to use them responsibly and ethically. Practice based on these guidelines as a future or current K-12 teacher. The following are some guidelines for what not to do when using AI in your assignments and for plagiarism detection:
Do not rely solely on AI tools to complete assignments. It is important to understand the material and complete assignments on your own, using AI tools as a supplement rather than a replacement for your own work.
Do not use AI tools to plagiarize*. Using AI to generate or modify content to evade plagiarism detection is unethical and violates academic integrity.
Do not assume that AI responses are always correct. It has been noted that AI can generate fake results.* Please see the plagiarism/academic integrity policy in the course syllabus.” -Selma Koc
“Intellectual honesty is vital to an academic community and for my fair evaluation of your work. All work submitted in this course must be your own, completed in accordance with the University’s academic regulations. Use of AI tools, including ChatGPT, is permitted in this course. Nevertheless, you are only encouraged to use AI tools to help brainstorm assignments or projects or to revise existing work you have written. It is solely your responsibility to make all submitted work your own, maintain academic integrity, and avoid any type of plagiarism. Be aware that the accuracy or quality of AI generated content may not meet the standards of this course, even if you only incorporate such content partially and after substantial paraphrasing, modification and/or editing. Also keep in mind that AI generated content may not provide appropriate or clear attribution to the author(s) of the original sources, while most written assignments in this course require you to find and incorporate highly relevant peer-reviewed scholarly publications following guidelines in the latest publication manual of the APA. Lastly, as your instructor, I reserve the right to use various plagiarism checking tools in evaluating your work, including those screening for AI-generated content, and impose consequences accordingly.” -Xiongyi Liu
“If you are ever unsure about whether collaboration with others, including using artificial intelligence, is allowed or not, please ask me right away. For the labs, although you may discuss them in groups (and try using AI), you must all create your own code, output and answers. Quizzes will be done in class and must be solely your own work. You alone are always responsible for the correctness of the final answers and assignments you submit.” - Emily Rauschert on AI as collaboration partner
“Chat GPT: The use of Chat GPT is neither encouraged nor prohibited on assignments for GAD 250. Chat GPT is quickly becoming a communication tool in most business settings. Therefore, if you choose to use Chat GPT for assignments, please be sure to revise the content for clarity, conciseness, and audience awareness. Chat GPT is simply a tool and should not be used as a way to produce first and only drafts. Every assignment submission will be graded using the rubric provided in the syllabus. Be aware that Chat GPT may not develop high-quality work that earns a passing grade. It is your responsibility to review and revise all work before submitting to the instructor.” -Leah Schell-Barber for a Business Communications Course
“Use of Generative AI, such as ChatGPT and Microsoft Bing-Chat, must maintain the highest standards of academic integrity and adhere to the OU Code of Student Conduct. The use of Generative AI should be seen as a tool to enhance academic research, not as a replacement for critical thinking and originality in assignments. Students are not permitted to submit assignments that have been fully or partially generated by AI unless explicitly stated in the assignment instructions. All work submitted must be the original work of the student. Any ideas garnered from Generative AI research must be acknowledged with proper in-text citation and reference. Students may be asked to save the AI chat as a PDF file for verification.” -Ohio University College of Business Generative AI Use for Academic Work Policy
“‘The policy of this class is that you must be the creator of all work you submit for a grade. The use of others’ work, or the use of intelligent agents, chat bots, or a.i. engines to create your work is a violation of this policy and will be addressed as per MSU and Broad College codes of conduct.’ - Jeremy Van Hof… Or, you might consider this, which I asked ChatGPT to write for me: ‘Sample Policy Language: Students should not use ChatGPT to complete course assignments or for any other academic activities. ChatGPT should be used as a supplemental resource and should not replace traditional academic activities.’ (ChatGPT per Jeremy Van Hof’s prompting)
Or this much longer version, also written by ChatGPT: ‘The following course policy statement prohibits the use of Artificial Intelligence (AI) for the completion of assignments and activities during the duration of the course. At the Broad College, we strive to create an academic environment where learning is the foremost priority. We strongly believe that learning is best achieved through the hard work and dedication of our students. As such, we prohibit the use of Artificial Intelligence (AI) for the completion of assignments and activities during the course. Our policy is in line with our commitment to providing a fair and equitable learning environment for all students. We believe that AI should not be used to substitute human effort, as it defeats the purpose of our educational goals, which are to encourage critical thinking and problem-solving. We understand that AI can be a useful tool in many contexts, and we do not discourage its use in other courses. However, in this course, we will not accept assignments or activities that have been completed through the use of AI. We expect our students to be honest and to complete their work independently. We will be monitoring student work closely to ensure compliance with this policy. Violations of this policy will be met with disciplinary sanctions. All students are expected to adhere to this policy and to abide by the standards of the University.’ (ChatGPT per Jeremy Van Hof’s prompting)” -Jeremy Van Hof, Broad College of Business
“I study AI. I research it in my role as faculty in the Experience Architecture and Professional & Public Writing majors. And I don’t think it’s inherently bad or scary, in the same way that a calculator isn’t bad/scary for math. Artificial intelligence technologies such as ChatGPT can be an excellent starting point and a place to begin inquiry. But they are not a replacement for human thinking and learning. Robots lack empathy and nuance. As such, here is my policy:
You may use AI as a tool, but you may not use AI to replace your own beautiful brain. That means that you may ask ChatGPT, for example, to give you a list of bands similar to one that you hear and appreciate in this course. You may ask ChatGPT to give you an overview of a punk scene in a geographic location at a particular time. You may ask it for the history of punk rock and punk cultures. You may ask it what happened to Sid Vicious.
But you may not ask it to write on your behalf, and you must not turn in anything that has been written by ChatGPT and pass it off as your own for any assignment in this class, including discussion responses, papers, and exams. If you do so, I will know, and that will lead to an uncomfortable moment–and to you failing the assignment.
This is not meant to be punitive. It’s meant to reinforce how much I value you and your ideas and your intellect. In a face-to-face environment, we would have a lengthy conversation about AI, ethics, and human learning. If you want to have that conversation, I’m happy to do so via Zoom–email me!” -Kate Birdsall, asynchronous US23 course on punk-rock politics
Developing your Scholarly and Ethical Approaches to Generative AI
Taken, with slight modification, from “Update Your Course Syllabus for chatGPT” by Ryan Watkins, Professor of Educational Technology Leadership, and Human-Technology Collaboration at George Washington University in Washington DC (2022), via Medium.
Beyond Syllabi Language
Communicate your perspective about AI use. In addition to syllabus statements, consider talking with your students about AI tools like ChatGPT. Regardless of your orientation to generative AI use, it is important that you clearly communicate your expectations with the introduction of each assignment/assessment.
Different levels of familiarity: Because generative AI is an emerging technology, students will have differing levels of familiarity with these tools. For instance, while ChatGPT can write a grammatically correct paper or appear to solve a math problem, it may be unreliable and limited in scope. Discuss with students the uses and limitations of AI tools more broadly in addition to your perspective on their use in your class.
Connect to critical thinking skills: AI tools have many implications beyond the classroom. Consider talking with students about how to be engaged consumers of AI content (e.g., how to identify trusted sources, reading critically, privacy concerns). Discuss how you and colleagues use AI in your own work.
Adapt assessments. AI tools are emerging, and it can be incredibly difficult to make any assessment completely free from AI interference. Beyond a syllabus statement, you may also consider adapting your assessments to help reduce the usefulness of AI products. However, before revising any assignment, it’s helpful to reflect on what exactly you want students to get out of the experience and share your expectations with your students. Is it just the end product, or does the process of creating the product play a significant role?
Create assessments that allow students to develop ideas over time. Depending on your class size, consider scaffolding assessments to be completed in small components (e.g., proposal, annotated bibliography, outline, first draft, revised drafts).
Ask students to connect their writing to specific course materials or current events. Students can draw from the course textbook, additional readings on Moodle or Blackboard, and even class discussion boards or in-class discussions.
Incorporate personal experiences and reflections. Provide students with opportunities to connect what they are learning to their own lives and experiences—stories unique to each individual.
Incorporate multimedia assessments. Consider developing or adapting assessments to include multimedia submissions (e.g., audio or video components). Also, consider peer-review and social annotation tools like Eli Review or Google Docs for students to use when responding to assigned readings or other materials.
Use class time. Ask students to complete writing assignments during class time (e.g. complete reading reflections at the beginning of class, or use exit tickets). Asking students to organize their ideas by writing during class may also support student engagement in other class activities such as discussions and group work.
Get Creative With Your Assignments: Visit “Update Your Course Syllabus for chatGPT” by Ryan Watkins (Medium article) for 10 ideas for creative assignments adapted for a classroom with chatGPT. You can mitigate the risk of students using chatGPT to cheat, and at the same time improve their knowledge and skills for appropriately using new AI technologies inside and outside the classroom.
Additional considerations to help you develop your generative AI philosophy (Watkins, 2022)
Expand your options. Consider your repertoire of instructional strategies. Atsusi Hirumi offers a guide to research-grounded strategies for any classroom. These are not, however, “a la carte” menus; you must use all of the steps of any strategy to gain the evidence-based benefits.
Reflect on your values. As Tyler Cowen pointed out, there will be those who gain and those who lose with the emergence of chatGPT and other generative AI tools. This is as true for students as it is for faculty and instructors. Be ready to openly discuss the ethical implications of generative AI tools with your students, along with the value of what you are teaching and why learning it is important to their futures.
Consider time. As discussed during Bryan Alexander’s webinar, ChatGPT and other generative AI tools offer a shortcut to individuals who are short on time. Examine your course schedule to determine whether you are unknowingly pushing students to take shortcuts. Some instructors already try to cover too much content in their courses.
Remember, AI is not human. Be careful not to anthropomorphize chatGPT and other generative AI tools. ChatGPT is a language model, and if we anthropomorphize these technologies, then it will be much harder to understand their promise and perils. Murray Shanahan suggests that we avoid statements such as, “chatGPT knows…”, or “ChatGPT thinks…”; instead, use “According to chatGPT…” or “ChatGPT’s output…”.
Again, AI is likely to be a part of your students’ lives to some extent this semester, so plan accordingly. Critically considering your course design in the context of generative AI is an important educator practice. Following the Provost’s call, MSU instructors are encouraged to 1) develop a course-level generative AI use policy and actively discuss expectations for generative AI use in your course with students, 2) promote equitable and inclusive use of the technology, and 3) work with colleagues across campus to determine ethical and scholarly applications of generative AI for preparing students to succeed in an evolving digital landscape. MSU does not currently have a university-wide policy on AI in the classroom, so it is your responsibility as the instructor to state and explain your individual course policy. A conversation with your department is highly recommended so that generative AI use in the classroom reflects its use in the discipline.
References
This resource is collated from multiple sites, publications, and authors with some modification for MSU context and links to MSU specific resources. Educators should always defer to University policy and guidelines.
MSU Office of Student Support & Accountability Faculty Resources, including Academic Dishonesty Report form.
Watkins, R. (2022). Update Your Course Syllabus for chatGPT. Educational Technology Leadership, The George Washington University, via Medium: https://medium.com/@rwatkins_7167/updating-your-course-syllabus-for-chatgpt-965f4b57b003
Center for the Advancement of Teaching (2023). Sample Syllabus Statements for the Use of AI Tools in Your Course. Temple University.
Center for Teaching & Learning (2023). How Do I Consider the Impact of AI Tools like ChatGPT in My Courses? University of Massachusetts Amherst. https://www.umass.edu/ctl/how-do-i-consider-impact-ai-tools-chatgpt-my-courses
Center for Teaching, Learning and Assessment (2023). AI, ChatGPT and Teaching and Learning. Ohio University. https://www.ohio.edu/center-teaching-learning/instructor-resources/chat-gpt
Office of Teaching, Learning, and Technology (2023). Artificial Intelligence Tools and Teaching. University of Iowa. https://teach.its.uiowa.edu/artificial-intelligence-tools-and-teaching
Center for New Designs in Learning and Scholarship (2023). ChatGPT and Artificial Intelligence Tools. Georgetown University. https://cndls.georgetown.edu/ai-composition-tools/#privacy-and-data-collection
Office for Faculty Excellence (2023). Practical Responses to ChatGPT. Montclair State University. https://www.montclair.edu/faculty-excellence/practical-responses-to-chat-gpt/
Teaching and Learning at Cleveland State University by Center for Faculty Excellence is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Posted by: Makena Neal
Pedagogical Design
Posted on: GenAI & Education
From The Chalkbeat https://www.chalkbeat.org/newark/2024/06/20/department-of-education-artificial-intelligence-resources-to-help-educators-schools/
"As part of Gov. Phil Murphy’s call to create an “artificial intelligence moonshot” in New Jersey, the state’s department of education unveiled a set of resources last week aimed at helping educators understand, implement, and manage artificial intelligence in schools...."
There is some useful material for higher ed educators and students on the NJ DOE's AI resource page: https://www.nj.gov/education/innovation/ai/
Posted by: David V. Howe
Posted on: #iteachmsu
Here's a new webinar on how we might use AI in the classroom:
How Teachers Can Use AI Effectively -- https://www.youtube.com/watch?v=sTxndY4el38
Posted by: Stokes Schwartz
Posted on: GenAI & Education
Massive changes have occurred recently with regard to artificial intelligence (AI) and the ability of the public to generate novel text and images using AI tools (e.g. ChatGPT). Many in education are concerned with what this means for assessing student understanding: if a student can generate a novel, accurate essay on almost any topic, how will you assess learning from short-answer and essay assignments?
On 02/01/2023, a campus collaboration of the APUE STEM+ Ed@State, Enhanced Digital Learning Initiative (EDLI), Center for Teaching and Learning Innovation (CTLI), MSU Libraries, and MSU IT EdTech, hosted the "Symposium on AI in Education and Academic Writing". During the symposium, the basics of how AI works were shared and attendees had opportunities to play with some AI tools. The event provided opportunities to hear how faculty are addressing these challenges, discuss concerns and opportunities with colleagues, and reflect on individual teaching philosophies in the time of artificial intelligence (AI).
Posted by: Makena Neal
Posted on: Power Automate/Apps...
Chad Randall shared this with me: a ChatGPT tool designed for Power Automate/Apps: https://checkmygpt.com/powerpacman
Posted by: David V. Howe
Posted on: GenAI & Education
This article outlines how ChatGPT can assist in grading with rubrics: https://blog.tcea.org/chatgpt-grading/
Posted by: Caitlin Kirby
Posted on: #iteachmsu
Generative AI for Student and Educator Support - These are the slides for the session on August 21, 2024 by CTLI staff.
https://www.canva.com/design/DAGM_uxIjNI/XzIMWwqtQicgdH0RmuBKNg/edit?utm_content=DAGM_uxIjNI&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton
If there are any questions or comments, please contact CTLI
Posted by: Jay Loftus
Posted on: #iteachmsu
Art and Artificial Intelligence - A conversation, Thurs Mar 23, 5-6:30, MSU Museum
A Conversation on March 23 with Bill Hart-Davidson, Anjana Susarla, and Mark Sullivan
https://www.instagram.com/p/CpvlQxTpMKg/?utm_source=ig_web_button_share_sheet&fbclid=IwAR26MbdapkvCAW_s-5j09NJi1qt2cuGpfvctQHemnNj5FTeb8zy6zhM-GSk
Posted by: Mark Sullivan