First, I hope that this course is what its title promises: Assessment of Learning AND Giving Feedback. When doing the pre-class assignments, I read the word "assessment" many times but missed the word "feedback". As a matter of fact (and luckily), I do not have to assess students very frequently (except for student admission, doctoral school courses, PhD follow-up groups, and thesis review). I am therefore more interested in the giving-feedback part, especially for the "hide in the crowd" and surface learners.

When it comes to assessment, the situation of PhD students in the doctoral school is typically one where assessment is implicitly spread out thinly over the whole PhD period. PhD thesis committee meetings, for example, are supposed to provide some sort of assessment of progress, but how do students really deal with them, and are they really helpful? Has anybody looked scientifically into this? Has thesis quality increased with the introduction of obligatory thesis committees?

Even though our study program has quite clear criteria for student performance (e.g. for thesis grading), the weakest link in the assessment chain is, in my opinion, us, the teachers. Human brains did not evolve to consistently apply the same algorithm to a problem. I have high hopes for AI taking over student grading; despite the massive problems with AI, it can only be a step forward. For humans, what tastes good at one time might taste bitter at another time due to objectively existing brain circuits, which are adjusted based on many factors, most of them out of our control. Even the same, objectively measurable properties (like heat, cold, sweet, bitter) can produce measurably different signals in the brain depending on the situation. Under calorie deprivation, for example, the sensory signaling for bitter taste is downregulated (see e.g. https://www.nature.com/articles/s41467-019-12478-x).
It is a bold statement if somebody claims to be able to objectively grade far more intricate properties such as critical thinking or creativity. What approaches are being used to minimize human error in such endeavors? I guess there are approaches, but is there good scientific evidence that they work in real life (where time is always lacking)? And do they work for all teachers, given that teachers are, like students, also very different from each other?

I teach at least one practical lab course per year ("protein purification"), which is arranged by the biomedical doctoral school (but is also open to BSc and MSc students). It is project-based and performed in groups of three students each (group size is not chosen freely but dictated by the available instrumentation and time). The actual project tasks are potentially different for all students, as they can and do provide their own samples. Assessment is very difficult because it would require almost as much time as the course itself. So far I only hope that the students "have learned something", i.e. that they are able to set up and successfully perform a protein purification. But then again, proteins are so difficult that even experts generally try to avoid them; I myself fail with difficult proteins. Hence the craze about genes: genes are only the instructions on how to build proteins (and are therefore much simpler), but proteins do almost all of the heavy lifting and for this reason have to be almost infinitely more diverse.

Because of this perceived difficulty of grading, I only grade pass/fail in these courses. Still, the pass/fail grading has always bothered me, because students clearly perform very differently during the course, and I perhaps fail to convey these differences to the students' transcripts. But I have no clue how I would go about assessing learning goals in this setting.
I could give each student a protein to purify, the necessary reagents and equipment, and one week's time. That would be an exam they would not forget! But wait: that is exactly what we do during the course, except that the students have somebody to turn to if they need input and discussion.