
Posted on: #iteachmsu
Monday, Oct 12, 2020
Concept Map Journal 3-7: Assignment example
One of the possible ways to have students work on a course project throughout a semester is to have periodic concept map assignments. Journals were framed as homework assignments to be done individually. The purpose of these assignments was to help students prepare for their final project in ISB202.
 
Overview of Journal Assignments #3-7 (What is the purpose of these journal assignments and what is the format?): In the remaining journal assignments, students will continue their investigation of their candidate species. You have considered general arguments for and against de-extinction; that is, does de-extinction seem like a good idea overall, at least on paper? Now, let’s ask what this process might look like in practice. To do so you will critically evaluate whether your species is a candidate for de-extinction. It stands to reason that some species may be better candidates for de-extinction than others. For example, we may know more about the biology of some species than others, appropriate habitat for release may be available for some species and not others, or the reason(s) for extinction may not be present any longer for some species but not others. We need to make decisions based on evidence regarding which species might be promoted as candidates and for which species it may never work.
 
An important point here is that you should evaluate species in their own right and not from the lens of your opinion about de-extinction; that means you need to use empirical evidence. If your position (after completing your first journal assignment) is that de-extinction will be an important tool as a conservation strategy, you should be careful that you step away from the position and use evidence to evaluate your candidate species. If you do this, then you are equally likely to conclude that your species is a candidate for de-extinction as you are to conclude it is not. The same is true if you oppose the idea; be open to the idea that your review of your candidate species might reveal that it just may be an excellent candidate. This is critical for this project, but also critical in the process of science. Scientists must weigh the evidence, be aware of any personal biases they may have, and do everything possible to limit personal opinions from swaying their interpretation.
 
The remaining journal assignments will be created using a concept map format, similar to the first journal assignment. Similarly, there will be three “levels” to the concept map. For a full description of this assignment, including descriptions of the levels of mapping, and overview of concept mapping generally, as well as the grading rubric, check out the attachment. You can also check out full descriptions of the reflective journal assignment and the annotated reference list journal assignment to gain the whole picture.
Authored by: Andrea Bierema
Posted on: #iteachmsu
Wednesday, Mar 3, 2021
What is formative feedback? (and why we should care)
Formative feedback is information on our thinking or our performance that gives us time to reflect and act on that feedback. Feedback is descriptive, evaluative, and suggestive. That is, good feedback shows us what we are doing, provides some sense of how we are doing relative to our goals, and provides some suggestions for how we might improve. Having said this, simple descriptive feedback can be quite powerful.
Processing feedback requires reflection. There is immense value in regular reflective practice regardless of your role or responsibilities. Taking time to critically examine how our experiences align with our expectations helps us identify opportunities for learning. Engaging in reflection as an iterative practice creates a norm of growth and improvement.
Summative evaluations of our teaching at the conclusion of each semester play a role in our institutional accountability. We can certainly learn from end-of-semester feedback and many educators do. However, if this is the only opportunity for students to provide course feedback, it comes at a time when they themselves are past the point of benefiting from it. 
Formative, mid-semester feedback, however, creates an opportunity for educators to engage learners in the process of reflective practice. Intentional reflection through mid-semester feedback can help explore the initial assumptions made about a class, gain insights from learners, and develop a more comprehensive awareness of teaching practice. Generally, because the knowledge gained through this process of reflection happens with students who have a stake in the course, this reflective practice strengthens teaching practice. Finally, it is important to note as our colleagues at Vanderbilt’s Center for Teaching have noted, “soliciting mid-semester feedback can improve our end-of-course evaluations, as it will both improve the quality of the course itself and provide students with early opportunities to raise concerns with the course.”
Finally, it is essential to note that mid-semester feedback is provided in confidence by students. Survey administrators will tabulate the responses and send the data to you. No one else will see or have access to the information collected on your course.
 
Adapted from the Enhanced Digital Learning Initiative at MSU: Scott Schopieray (CAL), Stephen Thomas (Nat. Sci.) Sarah Wellman (CAL & Broad), Jeremy Van Hof (Broad).
source: Finlay, Linda (2008). Reflecting on ‘Reflective practice’. Practice-based Professional Learning Paper 52, The Open University. 
Posted by: Makena Neal
Posted on: #iteachmsu
Wednesday, Apr 28, 2021
TLC During a Pandemic: Graduate TAs Build a Community Through Sharing Teaching Practices
Topic Area: Pandemic Pivot
Presented by: Stefanie Baier, Hima Rawal, Seth Hunt and Rosanne Renauer
Abstract:
When the pandemic hit unexpectedly, it disrupted some of our personal and professional connections, and the unexpected transition to remote teaching and learning changed some of our learning environments. However, not having to convene in certain physical locations offered new opportunities to establish and cultivate connections on an expansive level through a myriad of virtual spaces.
In this presentation, we share one such virtual community, the GTA TLC (Teaching Learning Community), which has afforded GTAs the opportunity to connect across disciplines and physical distance to share best practices, tools, technologies, methods, and ways of effective teaching. The GTA TLC has formed a community that meets bi-weekly and opens the door to anyone who wants to connect with fellow GTAs and other teaching enthusiasts to share, discuss, and reflect on innovative ideas around instruction. This space has not only supported knowledge building but also created friendships across time zones, spaces, disciplines, and diverse backgrounds. The emphasis has been placed on the co-construction of knowledge while enhancing student engagement and student success in different settings from the lens of multiple GTA roles.
In this information session, we will showcase excerpts of the best-attended sessions, including culturally responsive pedagogy, accessibility and technology tools for student engagement, feedback data from participants, and testimonials about the impact and growth of this community. All of these goals have been nested within our overarching theme of well-being that nurtures our GTA TLC, thereby elevating GTAs’ awareness of students’ needs and holistic well-being.
Session Resources:
TLC During a Pandemic_Stefanie Baier.pdf
Authored by: Stefanie Baier, Hima Rawal, Seth Hunt, Rosanne Renauer
Posted on: The MSU Graduate Leadership Institute
Monday, Oct 11, 2021
Graduate Student Safety Week
Brief Project Description
I conducted a field safety week for graduate students in the College of Natural Resources. Students come into the department with different levels of comfort in forested environments, and this training was designed to serve as a risk mitigation intervention and to give all students the basic skills they need to conduct fieldwork safely. The trainings included:
 Field Safety Week Schedule 
June 21-25   
Monday, June 21: Auto safety day  
-Afternoon session: Hitching and backing a trailer w/ Bas Kimlinger, Michigan State Capital Facilities Team Member 
-Evening session: Safety concerns for field researchers from underrepresented backgrounds, CANR DEI Office 
  
Tuesday, June 22: Know before you go day  
-Morning session: Overview of field safety equipment w/ Alex White, MSU Forestry PhD student 
-Afternoon session: Snakes of Michigan w/ Brianna Mims, M.S., Ecology and Evolutionary Biology  
  
Wednesday, June 23: Wilderness First Aid Certification  
-All day w/ Great Lakes EMS Academy 
  
Thursday, June 24: Wilderness First Aid Certification  
-All day w/ Great Lakes EMS Academy 
  
Friday, June 25: In case of emergency day  
-Morning session: Land navigation w/ Kasey Wilson, MSU GIS Professional 
-Afternoon session: Self-defense w/ Officer Melvin, Lansing PD 
 
 
Reflection
Who did you work with (college administrators, key stakeholders, offices on campus, etc.) and what was their role in your project? 
I worked with Lauren Noel, Kelly Millenbaugh, and Laura Bix. Lauren supported me with logistics and registration. She helped me reserve a meeting room for the event and built a registration page. Kelly Millenbaugh committed funds to pay for the wilderness first aid training. Laura Bix supported project development. We’re still working together to make safety week happen again next summer. I also worked with the CANR Diversity Office to develop a training on the unique safety concerns faced by researchers from underrepresented backgrounds. I worked with individual presenters to develop and schedule sessions that met the needs of graduate student researchers. 
How did you manage relationships with key stakeholders in your college to achieve your project goals? 
I managed the relationships by setting up regular meetings on Zoom and seeking input from these stakeholders. I kept them updated as I developed the project.
What has been the impact of your project? Who was your target audience and what difference did your project make for them?  
My target audience was graduate students in natural resources. My project made them feel safer in the field. The students I spoke to did not have any formal safety training and were grateful for the opportunity. In my post-event survey, students expressed an interest in participating in the training again. I hope that the training will help reduce injuries in the field. I intend to continue working on this project to develop more of a culture of field safety in the department.
If someone were to continue your work in the future, what advice would you have for them?  
I would suggest that the person meet in person with students to get input on the types of training that they need. I had a Zoom meeting with students, but we couldn’t meet in person because of COVID. I would also suggest using the great resources that the university has and involving faculty and staff from the college. Everyone I worked with was excited to help and was very supportive of the project. With campus being open again, I think there are more opportunities to involve additional stakeholders. I would also suggest working on everything as early as possible. That helps to reduce stress and make the event go smoothly.
How did this work contribute to your personal leadership development? 
This project gave me a chance to work with fellow graduate students to create a program that was helpful to them. I sought their input on the types of programming that they needed to be safer in the field. Doing this project involved working with peers, faculty members, and administrators. Working with faculty and staff gave me a chance to present ideas for improving the department. That was an important aspect, because I wanted to be respectful and gracious while also suggesting changes.  
Authored by: Alex White
Posted on: IT - Educational Technology
Monday, Apr 21, 2025
MSU Educational Technology Summit
MSU Educational Technology Summit
June 2 - 5, 2025
This event aims to familiarize instructors, staff, and students with campus technologies supporting teaching and learning. This year, there will be workshops and presentations on useful tips and instructor experiences using technology tools and campus resources such as: Camtasia, Packback, Eli Review, 3D Printing, Quality Matters, Crowdmark, Knowledge Commons, Online Whiteboards, Virtual Reality, D2L, AI, OneNote Class, Spartan A11y, Stable Links, Zoom, iClicker, Open Educational Resources (OER), and more!

Interested in attending? You can enroll for the MSU Educational Technology Summit on D2L now! Zoom registration links for all virtual meetings are available by enrolling in the 2025 Educational Technology Summit on D2L. Please enroll in the D2L course and, once in the D2L course, register for each Zoom session you plan to attend. Descriptions of each session and topics covered are available on the D2L course as well. Or you can register to attend through LibCal.
Monday June 2, 2025
9:00 AM - 9:50 AM
AI Playground
The AI Playground is a virtual drop-in space where you can explore creative and practical ways to use AI in teaching and learning. No prep or pressure—just come as you are and see what’s possible!
Presented by MSU IT EdTech / EducationalTechnology@msu.edu
 
10:00 AM - 10:50 AM
Eli Review: Exploring How to Improve Feedback Across Multiple Disciplines
Presented by Casey McArdle / cmcardle@msu.edu, WRAC

11:00 AM - 11:50 AM
Investigating the Relationship of Quality Matters (QM) on Online Student Engagement in Higher Education
Presented by David Goodrich / daveg@msu.edu, CTLI

1:00 PM - 1:50 PM
Transforming Clinical Training with Virtual Reality: A Campus-Wide Opportunity
Presented by Andy Greger / gregeran@msu.edu, College of Nursing

2:00 PM - 2:50 PM
Teaching with the OneNote Class Notebook and Teams
Presented by Kevin Markle / marklek2@broad.msu.edu, Broad College - Accounting and Information Systems

3:00 PM - 3:50 PM
From Live to Library: Enabling Inclusive, Adaptive Learning through Zoom and MediaSpace
Presented by Keith LeRoux & Stephen Elmer / lerouxk1@msu.edu & elmerst2@msu.edu, Statewide Campus System, College of Osteopathic Medicine
 
Tuesday June 3, 2025
9:00 AM - 9:50 AM
AI Playground
The AI Playground is a virtual drop-in space where you can explore creative and practical ways to use AI in teaching and learning. No prep or pressure—just come as you are and see what’s possible!
Presented by MSU IT EdTech / EducationalTechnology@msu.edu
 
10:00 AM - 10:50 AM
3D Printing and Modeling for Everyone
Presented by Amanda Tickner / atickner@msu.edu, MSU Libraries Makerspace

11:00 AM - 11:50 AM
Getting Started with KCWorks, the New Knowledge Commons Repository!
Presented by Larissa Babak / babaklar@msu.edu, Knowledge Commons

1:00 PM - 1:50 PM
D2L Checklists: A Simple Tool to Improve Course Navigation
Presented by Andrea Bierema / abierema@msu.edu, Center for Integrative Studies in General Science & Dept. of Integrative Biology

2:00 PM - 2:50 PM
NeuroSupport GPT: Demonstrating the Design Process of a Custom GPT for Inclusive Teaching
Presented by Min Zhuang / zhuangm2@msu.edu, EDLI

3:00 PM - 3:50 PM
From Classroom to Text: Leveraging SMS for Real-Time Learning and Student Support
Presented by Christina Bridges & Julia Barnes / bridgec3@msu.edu & barne454@msu.edu, Strategic Retention Unit, Office of Undergraduate Education
 
Wednesday June 4, 2025
9:00 AM - 9:50 AM
AI Playground
The AI Playground is a virtual drop-in space where you can explore creative and practical ways to use AI in teaching and learning. No prep or pressure—just come as you are and see what’s possible!
Presented by MSU IT EdTech / EducationalTechnology@msu.edu
 
10:00 AM - 10:50 AM
Getting Started with Camtasia and Snagit
Presented by Casey Seiter / c.seiter@techsmith.com, TechSmith

11:00 AM - 11:50 AM
Simple Tools, Big Impact: Modifying Learning Environments with Simple Tech for Non-English Speakers
Presented by Allison Peterson & Cathy Lugibihl / peter382@msu.edu & lugibihl@msu.edu, Student Life & Engagement, Human Resources - Talent Development Team

1:00 PM - 1:50 PM
Removing Barriers to Learning: Improve D2L Course Usability and Accessibility with Ally
Presented by Kevin Henley & Sam Abele / henley@msu.edu & abelesam@msu.edu, IT - Educational Technology

2:00 PM - 2:50 PM
Call Us Irresponsible: Designing Online Curricula for the AI Era
Presented by Lisa Batchelder & Sarah Freye / schulma7@msu.edu & freyesar@msu.edu, IT - Educational Technology

3:00 PM - 3:50 PM
Classroom Engagement Made Simple: iClicker
Presented by David Maltby / David.Maltby@Macmillan.com, iClicker
 
Thursday June 5, 2025
9:00 AM - 9:50 AM
AI Playground
The AI Playground is a virtual drop-in space where you can explore creative and practical ways to use AI in teaching and learning. No prep or pressure—just come as you are and see what’s possible!
Presented by MSU IT EdTech / EducationalTechnology@msu.edu
 
10:00 AM - 10:50 AM
Using Technology Tools to Engage Students in Extensive Reading
Presented by Wenying Zhou / zhouweny@msu.edu, Dept. of Linguistics, Languages, and Cultures

11:00 AM - 11:50 AM
Online Whiteboards: Enhancing Teaching, Collaboration, and Community Building
Presented by Ellie Louson & Makena Neal / lousonel@msu.edu & mneal@msu.edu, CTLI & Lyman Briggs College

1:00 PM - 1:50 PM
AI in the Classroom
Presented by Jorie Koelling / jorie.koelling@packback.co, Packback

2:00 PM - 2:50 PM
Stable Links for Electronic Resources
Presented by Kristen Lee / leekrist@msu.edu, MSU Libraries

3:00 PM - 3:50 PM
The Best of Both Worlds: The Academic Security of In-Person Assessments and the Ease of Digital Grading
Presented by Phillip C. Delekta / delektap@msu.edu, Dept. of Microbiology, Genetics, & Immunology
Posted by: Lindsay Tigue
Posted on: GenAI & Education
Thursday, Jun 13, 2024
An Intro to Artificial Intelligence
The following is a collection of publicly sourced articles related to the recent rise of artificial intelligence in everyday applications. Note: following each [hyperlinked] title/subtitle are abstracts generated by ChatGPT after receiving prompts from M. Neal. "Write an abstract for [article link]" is indicated with [URL] abstract. "Write an abstract for [article text]" is indicated with [text] abstract. These very similar prompt-responses are shared consecutively in the hope that reading the abstracts will showcase the variations in ChatGPT output (despite similar input).

Big Tech was moving cautiously on AI. Then came ChatGPT.
Google, Facebook and Microsoft helped build the scaffolding of AI. Smaller companies are taking it to the masses, forcing Big Tech to react.
[URL] Abstract: "ChatGPT, the Large Language Model developed by OpenAI, has been acquired by Google for an undisclosed sum. The acquisition has been widely seen as a move to improve Google's AI capabilities and stay ahead of competitors in the field. The ChatGPT model will be integrated into various Google services and products, including search, voice assistants, and translation services. This acquisition highlights the growing importance of AI technology and the increasing demand for advanced language models like ChatGPT."
[text] Abstract: Meta's Blenderbot, a chatbot released by Facebook's parent company three months before OpenAI's ChatGPT, was deemed a failure due to its over-cautious content moderation. In contrast, ChatGPT's conversational style, which allows for discussion on controversial topics, has made it a mainstream success. As ChatGPT continues to grow in popularity, tech giants such as Meta and Google are feeling pressure to speed up their own AI processes, potentially disregarding safety concerns. This has sparked concerns among AI ethicists, who fear that big tech's rush to market may expose billions of people to potential harms before they have been fully studied. Despite these concerns, companies like OpenAI continue to release their AI tools to the public, viewing it as the only way to assess real-world harm. The fast pace of AI progress requires companies to prioritize making the right decisions and releasing AI models that best serve the community, while considering the broader societal impacts of these innovations.

ChatGPT Is Dumber Than You Think
Treat it like a toy, not a tool.
Abstract: "The deployment of AI language models like ChatGPT, developed by OpenAI, in writing and journalism raises questions about the ethics of AI-generated content. ChatGPT is capable of producing human-like text, making it difficult to distinguish between AI-generated content and content written by a human. The ethical concerns include issues of accuracy, accountability, and bias in AI-generated writing, as well as the potential for AI systems to manipulate public opinion. The article highlights the need for transparent and responsible deployment of AI language models in journalism and writing, and the importance of considering the ethical implications of AI-generated content."
ChatGPT could be used for good, but like many other AI models, it's rife with racist and discriminatory bias
[URL] abstract: ChatGPT, a large language model developed by OpenAI, is prone to biases like many other AI models. Despite its impressive performance in generating human-like text, the model has been found to have biases based on the training data it was exposed to. These biases can be a result of various factors such as the demographic representation in the training data, cultural norms and values, and other systemic biases present in society. It is crucial for developers and users to be aware of these biases and take steps to mitigate them.
[text] abstract: ChatGPT, an advanced artificial intelligence chatbot developed by OpenAI, is at the forefront of the debate on the potential and dangers of AI. Although the chatbot can assist with writing and speaking tasks and provide entertainment, it has also been used to cheat, plagiarize, spread misinformation, and enable unethical business practices. Furthermore, like many chatbots, it has a history of being biased. OpenAI added guardrails to prevent problematic responses, but these were easily bypassed by users. As AI is increasingly used to make important decisions in various industries, such as healthcare and law enforcement, the issue of bias must be addressed to prevent amplifying existing inequalities. This problem is not unique to ChatGPT and is well-documented, with studies showing the negative effects of biased algorithms. The AI industry is divided on who should be held responsible for these biases and whether harmful models should be released. Despite these concerns, AI models are becoming more prevalent in our lives and researchers are urging caution on the potential for harm due to AI biases.
The US Copyright Office says an AI can’t copyright its art
[text] abstract: The US Copyright Office has rejected a request to copyright a work of art created by an AI algorithm named Creativity Machine. A three-person board determined that the AI-generated image lacked the necessary "human authorship" required for protection under copyright law. The board emphasized the vital role of the human mind in creative expression, citing past court decisions which have consistently found that non-human expression is ineligible for copyright protection. This decision highlights the importance of human agency in machine-produced works and raises questions about the limits of AI in the realm of art and copyright.

[Google] An important next step on our AI journey
[URL] abstract: This blog post provides an overview of BARD (Bidirectional Attention Representation for Documents), a new AI-powered search system developed by Google. BARD leverages the latest advances in natural language processing and machine learning to deliver more relevant and accurate search results, even for complex queries. The system utilizes a bidirectional attention mechanism to better understand the context of each word in a document and make connections between words, which helps to improve the relevance of search results. With BARD, Google aims to provide users with a more intuitive and seamless search experience, enabling them to find what they're looking for more quickly and easily.
[text] abstract: Google has opened up its experimental conversational AI service, Bard, to trusted testers ahead of wider availability. Bard aims to combine the world's knowledge with the power, intelligence and creativity of large language models, drawing on information from the web to provide high-quality responses. Bard will initially be released with a lightweight version of LaMDA, the company's language model for dialogue applications. The release will be used to gather feedback and improve quality and speed. AI-powered features in Google Search, which will distill complex information and multiple perspectives, will soon be rolled out. Google also plans to onboard individual developers and enterprises to build on top of its best AI models.
last updated on 02/06/2023
Authored by: Makena Neal & ChatGPT
Posted on: #iteachmsu
Monday, Nov 16, 2020
Grading & Giving Feedback
Edit a Question During its Availability
Occasionally, a test question will need to be edited while an exam is in progress.
 

Quizzes – Manually Grade a Quiz - Instructor
Short answer questions, although auto-graded by D2L, should be double-checked for grading accuracy.
 

D2L Assessment Analytics
Examining quiz question statistics can help instructors determine if a question is too easy, too challenging, or needs editing for clarification.
The following is a quick guide for D2L Quiz and Grade Item statistics to help you monitor and improve your assessment questions and results.
D2L Quiz Statistics
To see how students performed overall on each of the quizzes, in your own course go to Assessments > Quizzes > Statistics (click on Statistics from the tab view across the top).

This list displays all of your course quiz averages.
Click on a quiz to see more details including User Stats, Question Stats, and Question Details.

Question Stats
The Question Stats list the Standard Deviation, Discrimination Index, and Point Biserial value for each question. 

You can click on the link, "What do the statistics on this page mean?" above the table in your course to learn more. The information is also copied below.
What do the statistics on this page mean?
All statistics are calculated based on each user’s first attempt on the quiz. If a question is changed after attempts have been made, only the attempts on the newest version of the question are included in the statistics (i.e., first attempts made before a question was changed are not included in the statistics for that question).
STANDARD DEVIATION
The standard deviation indicates how much scores vary from the average, ranging from 0% to 100%. A high standard deviation indicates that scores are spread out from the average, whereas a low standard deviation indicates that scores are close to the average.
DISCRIMINATION INDEX
The discrimination index indicates how well a question differentiates between high and low performers. It can range from -100% to 100%, with high values indicating a “good” question, and low values indicating a “bad” question.
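To make the idea concrete, here is a minimal Python sketch of a common textbook formulation of a discrimination index: compare the proportion of correct answers in the top-scoring group against the bottom-scoring group. D2L documents the range of its index but not its exact formula, so the 25% grouping fraction and tie handling here are assumptions for illustration only.

```python
def discrimination_index(correct, scores, fraction=0.25):
    """Illustrative discrimination index: proportion correct in the
    upper-scoring group minus proportion correct in the lower-scoring group.
    correct: list of 0/1 item results per student (1 = answered correctly)
    scores:  list of total quiz scores per student (same order)
    Note: the 25% group size is a common textbook choice, not D2L's
    documented method.
    """
    n = len(scores)
    k = max(1, int(n * fraction))          # students per comparison group
    order = sorted(range(n), key=lambda i: scores[i])  # rank by total score
    low, high = order[:k], order[-k:]
    p_high = sum(correct[i] for i in high) / k
    p_low = sum(correct[i] for i in low) / k
    return p_high - p_low                  # ranges from -1.0 to 1.0

# A question that only high scorers answer correctly discriminates well:
d = discrimination_index([1, 1, 1, 1, 0, 0, 0, 0],
                         [95, 90, 85, 80, 60, 55, 50, 45])
print(d)  # → 1.0
```

A value near 1.0 (100%) means high scorers got the item right and low scorers did not; a negative value is a red flag worth investigating.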
POINT BISERIAL CORRELATION COEFFICIENT
The point biserial correlation coefficient is an analysis only applied to multiple choice and true/false question types that have only one answer with weight 100%, and all others with weight 0%.
Like the discrimination index, the point biserial correlation coefficient relates individuals’ quiz scores to whether or not they got a question correct. It ranges from -1.00 to 1.00, with high values indicating a “good” question, and low values indicating a “bad” question.
*Note that only first attempts are included in that question's statistics.
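The standard statistical formula behind a point biserial correlation can be sketched in a few lines of Python. This is the textbook definition (difference of group means scaled by the overall standard deviation and group proportions); it is an illustration of the statistic, not a claim about D2L's internal implementation.

```python
import math

def point_biserial(correct, scores):
    """Point biserial correlation between a binary item result
    (1 = answered correctly, 0 = not) and total quiz scores.
    Textbook formula: r = (M1 - M0) / s * sqrt(p * q), where M1/M0 are
    mean scores of the correct/incorrect groups, s is the population
    standard deviation of all scores, and p/q are the group proportions.
    """
    n = len(scores)
    mean_all = sum(scores) / n
    sd = math.sqrt(sum((s - mean_all) ** 2 for s in scores) / n)
    group1 = [s for c, s in zip(correct, scores) if c == 1]
    group0 = [s for c, s in zip(correct, scores) if c == 0]
    if not group1 or not group0 or sd == 0:
        return 0.0  # undefined when everyone answered the same way
    p, q = len(group1) / n, len(group0) / n
    m1, m0 = sum(group1) / len(group1), sum(group0) / len(group0)
    return (m1 - m0) / sd * math.sqrt(p * q)

# Students who got the item right also scored higher overall,
# so the coefficient is strongly positive (close to 1.0):
r = point_biserial([1, 1, 1, 0, 0], [90, 85, 80, 60, 55])
```

A strongly positive value means the item tracks overall performance; values near zero or negative suggest the item may be miskeyed or poorly worded.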
Question Details
This tab will show you the summary of student responses for each question. If you notice a low or negative value for the Point Biserial or Discrimination Index, you may want to investigate the question. It could indicate a badly worded question or improperly keyed question answer.


 

For more, view the video tutorial on Generating Reports in D2L Learning Environment. Currently, the statistics do not display for random "pool item" question types. Contact the MSU Service Desk to check on obtaining reports through the Data Hub.


Grade Item Statistics
To view grade item stats, in your own course go to, Assessments > Grades > (Grade Item) View Statistics – Use the pull down menu by a grade item title and select Statistics to display Class and User Statistics. If you have a grade scheme setup to display, you will also see the grade distribution chart on the page.

Working with student data

Keep the MSU Institutional Data Policy in mind when storing data and making reports public in order to protect the security and confidentiality of student data.
Read more about best practices for handling data at secureit.msu.edu/data from MSU IT Services – Academic Technology.

Addressing Issues of Academic Misconduct
What should you do if you discover cheating in your course? Follow the link to find out more. 
What is an Academic Dishonesty Report
If you give a penalty grade as a result of academic misconduct, you must submit an Academic Dishonesty Report (ADR) to the university. See the link above as an example. 
Authored by: Casey Henley & Susan Halick
Posted on: The MSU Graduate Leadership Institute
Monday, Oct 11, 2021
Strides Towards More Equitable Graduate Admissions
For my Graduate Leadership Fellow project, I worked with a network of instructors across multiple institutions to draft an article on creating more equitable graduate admissions in the biological sciences. I gave a current graduate student’s perspective in a team of primarily faculty members. We drafted a proposal for the article for submission to the American Naturalist’s call for special section papers on Nature, Data, and Power. This proposal pulled together existing literature on equitable admissions, as well as expertise from faculty working across public, regional, and private institutions, from R1 to primarily teaching. The focus of our article is to create a guide that Michigan State University and other biological science departments can use to assess and revise their own admissions processes.
 
Because our work is collaborative and multi-institutional, it has been slow-going. This article is still undergoing the proposal and drafting process after rejection from AmNat’s special section. We are currently seeking a home for the proposal and the article itself as we continue to work towards crafting this important guide. 
 
If someone were to continue this work in the future, it would be important to work together across the different departments in the university to discover which departments are open and interested in revising their admissions process in an evidence-based fashion. Because the article focuses on biological sciences broadly, and MSU’s biological sciences departments are rather siloed, this project would require advocates in multiple departments to collaborate and unify goals.  
 
As someone interested in equity and in serving in a teaching and advising role to undergraduate students at regional universities, this project was important for developing my understanding of the current state of graduate admissions as well as where the conversation around admissions is headed. I hope that I can use the collaborative skills and the knowledge gained from working with faculty and administration in order to better guide my future mentees towards their own goals, inside or outside the institution of academia.  
Authored by: Acacia Ackles