Friday, 19 March 2021

Blogging has gone out of style for some time. So it might be interesting to start again and think of something to write, or rather to speak about, since I'm actually now using the voice dictation system on my Mac.

I have been trying to figure out a theme for my research, and after a discussion with my supervisor last weekend I decided to look at something called contextual learning. Simultaneously, I have been reading a book by Kazuo Ishiguro, "Clara and the Sun". Another book that I'm reading right now is about evolutionary psychology, by an author called Douglas T. Kenrick. 

Putting together the three things that are on my mind right now, I am thinking about what may be the difference between face-to-face learning sessions and online lectures. The subtle differences might include the lack of context in online lectures - for example, whispers from your neighbors, carefree conversations before and after lectures, and serendipitous talks with classmates during lunchtime. 

There is a connection between the novel I am reading and the book about evolutionary psychology. The girl in the novel seems to change her personality according to the context that she is in - for example, when she's around friends, she is quite cold towards her AF. According to evolutionary psychology this is quite normal; in fact, we all have different personalities within us, and the personalities suggested in the psychology book (called subselves) are as follows.

- team player (affiliation)

- go-getter (status)

- night watchman (safety)

- compulsive (avoiding illness)

- swinging single (acquiring mates)

- good spouse (retaining mates)

- parent (kin care) 


Friday, 23 August 2019

the ARCS Model of Motivational Design

According to John Keller's ARCS Model of Motivational Design, there are four conditions for promoting and sustaining motivation in the learning process: Attention, Relevance, Confidence, and Satisfaction (ARCS). (1)(2)


The ARCS Model of motivation was developed in response to a desire to find more effective ways of understanding the major influences on the motivation to learn, and for systematic ways of identifying and solving problems with learning motivation. (3)

The resulting model contains a four category synthesis of variables that encompasses most of the areas of research on human motivation, and a motivational design process that is compatible with typical instructional design models.




Attention

(1) Perceptual arousal – uses surprise or uncertainty to gain interest, through novel, surprising, incongruous, or uncertain events; or 
(2) Inquiry arousal – stimulates curiosity by posing challenging questions or problems to be solved.

Tactics for this can range from simple unexpected events (e.g. a loud whistle, an upside-down word in a visual) to mentally stimulating problems that engage a deeper level of curiosity, especially when presented at the beginning of a lesson.

Another element is variation, which is necessary to sustain attention. People like a certain amount of variety and they will lose interest if your teaching strategies, even the good ones, never change. 

Relevance

Even if curiosity is aroused, motivation is lost if the content has no perceived value to the learner. 
Relevance results from connecting the content of instruction to important goals of the learners, their past interests, and their learning styles. 

One traditional way to do this is to relate instructional content to the learners' future job or academic requirements. Another, and often more effective, approach is to use simulations, analogies, case studies, and examples related to the students' immediate and current interests and experiences. 

For example, secondary school children enjoy reading stories with themes of stigma, popularity, and isolation because these are important issues at that time of their lives. 

Confidence


Confidence is built by helping students establish positive expectancies for success. 

Often students have low confidence because they have very little understanding of what is expected of them. By making the objectives clear and providing examples of acceptable achievements, it is easier to build confidence. 

Another aspect of confidence is how one attributes the causes of one’s successes or failures. Being successful in one situation can improve one’s overall confidence if the person attributes success to personal effort or ability. 
If the student believes that success was due to external factors such as luck, lack of challenge, or decisions of other people, then confidence in one’s skills is not likely to increase. 

To sustain this motivation, the fourth condition is required:

Satisfaction

Satisfaction refers to positive feelings about one's accomplishments and learning experiences. It means that students receive recognition and evidence of success that support their intrinsic feelings of satisfaction, and that they believe they have been treated fairly.

Tangible extrinsic rewards can also produce satisfaction, and they can be either substantive or symbolic. 

That is, they can consist of grades, privileges, promotions or such things as certificates, monogrammed school supplies, or other tokens of achievement. 

Opportunities to apply what one has learned coupled with personal recognition support intrinsic feelings of satisfaction. Finally, a sense of equity, or fairness, is important. 

Students must feel that the amount of work required by the course was appropriate, that there was internal consistency between objectives, content, and tests, and that there was no favoritism in grading. 


References
  1. Keller, J. M. (2009). Motivational design for learning and performance: The ARCS model approach. Springer Science & Business Media.
  2. Keller, J. M. (1987). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2-10.
  3. Keller, J. M. (1987). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2-10. https://doi.org/10.1007/BF02905780
  4. https://app.nova.edu/toolbox/instructionalproducts/ITDE_8005/weeklys/2000-Keller-ARCSLessonPlanning.pdf

Friday, 16 August 2019

what makes a good assessment?

Not only in medical schools, but in any educational experience, assessment is the bane of students' existence, but it is starting to be re-evaluated as a valuable tool in learning.

So if students hate assessments (and they always will, no matter how many candies you give them), how can we at least make them effective and useful for the students?

A good assessment should have:
  • Generalizability
    • an "A" scored in an exam should mean that the student has exceptional ability in a certain field. Generalizability means that the score reflects the ability: for example, a student who scored an A in Anatomy will have more knowledge about anatomy in general, compared to another student who scored a C. 
    • a good assessment with generalizability is one where we can conclude something from the results. For example, a pass in a driving license exam suggests that the person is equipped with enough knowledge and skill to drive a car (hopefully)
  • valid (with evidence of validity) - does the test measure what it is supposed to measure?
    • content validity 
      • the assessment content should be one that reflects the learning outcomes and teaching strategies. 
      • some common mistakes made are:
        • content underrepresentation - too little data for the assessment - e.g. a five-minute interview being the only assessment used to determine whether a student may enter a university. 
        • construct-irrelevant variance - irrelevant data used for the assessment - e.g. in anatomy, "name the carpal bones, in Latin". 
    • response process
      • this is about the administration, management, and implementation of the exam 
      • if the students say "we don't think it is fair to be tested this way" or "we don't know how to pass this exam", there is a need to re-evaluate the validity of the exam. 
    • internal structure
      • item difficulty and discrimination indices
      • reliability 
      • score variance 
    • relationship to other variables
      • this is how related the assessments are in terms of the abilities tested - for example, a clinical Mini-CEX exam score should correlate more with OSCE than a knowledge-based MCQ. 
      • hence an assessment can be considered valid if its score has a positive correlation with another exam score that tests the same abilities. 
    • consequences
      • the influence the assessment has on the stakeholders - i.e. the students, educators, and society. 
      • for example an assessment that is so difficult that it would discourage students, or drive them to insanity, is not a valid assessment. 
  • Reliability - degree to which an assessment tool produces stable and consistent results
    • test-retest reliability - does retaking the same test produce similar results?
    • parallel forms reliability - does asking the same construct in a different test produce the same score?
    • inter-rater reliability - do different judges give the same student similar scores?
    • internal consistency reliability - does asking the same construct in different question produce the same answer?
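The reliability sub-types listed above can be made concrete with a small sketch. The scores below are made up for illustration (pure Python standard library): test-retest reliability can be estimated as the Pearson correlation between two sittings of the same test, and internal consistency with Cronbach's alpha.

```python
from statistics import mean, pvariance

def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of spreads
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(item_scores):
    # item_scores: one list per item, each holding every student's mark
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-student totals
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# hypothetical marks for five students on two sittings of the same test
first  = [62, 75, 58, 81, 70]
second = [65, 73, 60, 84, 68]
print(round(pearson(first, second), 2))  # → 0.96 (scores are stable)

# hypothetical 0/1 marks on four MCQ items; a sample this tiny makes
# alpha unstable, so the low value here is illustrative only
items = [[1, 1, 0, 1, 1], [1, 0, 0, 1, 1], [0, 1, 0, 1, 1], [1, 1, 1, 1, 0]]
print(round(cronbach_alpha(items), 2))  # → 0.22
```

A real analysis would of course use a full cohort's scores; the point is only that each reliability sub-type has a concrete statistic behind it.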
A more straightforward explanation was given by van der Vleuten (1996), who proposed five criteria for determining the usefulness of a particular method of assessment:
  • reliability (the degree to which the measurement is accurate and reproducible), 
  • validity (whether the assessment measures what it claims to measure), 
  • impact on future learning and practice, 
  • acceptability to learners and faculty, 
  • costs (to the individual trainee, the institution, and society at large).
Van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ 1996;1:41-67.
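One way to read van der Vleuten's five criteria is that they trade off multiplicatively: a method that scores zero on any one criterion has no overall utility, however strong the rest are. The sketch below uses my own illustrative ratings, not numbers from the paper.

```python
def utility(reliability, validity, impact, acceptability, cost_efficiency):
    # each criterion rated 0.0-1.0; the product makes every criterion necessary
    return reliability * validity * impact * acceptability * cost_efficiency

# e.g. a hypothetical OSCE-like exam: strong on most criteria, costly to run
print(round(utility(0.8, 0.9, 0.7, 1.0, 0.5), 3))  # → 0.252
# a perfectly valid test that nobody accepts has zero overall utility
print(utility(0.9, 1.0, 0.8, 0.0, 0.9))  # → 0.0
```

The multiplicative reading is why a cheap, well-accepted test can end up more useful in practice than a psychometrically superior one nobody can afford to run.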

evaluation of a medical education intervention

How is a medical education intervention evaluated?

It is one tough task to suggest an education reform for a medical school, to fork out money for structural change, and to convince a reluctant faculty to change whatever seems to be working fine in order to make things better.

But there is another tough nut to crack - the evaluation of such interventions once they are done.

How could we evaluate the success of any medical education intervention?

Educational evaluation is the systematic appraisal of the quality of teaching and learning. Just as with assessment, evaluation can have a formative role, identifying areas where teaching can be improved, or a summative role, judging the effectiveness of teaching.

Four general approaches to educational evaluation have been suggested by Wilkes and Bligh (1999) -

  • student oriented - Predominantly uses measurements of student performance (usually test results) as the principal indicator.
  • programme oriented - Compares the performance of the course as a whole to its overall objectives and often involves descriptions of curriculum or teaching activities.
  • institution oriented - Usually carried out by external organisations and aimed at grading the quality of teaching for comparative purposes.
  • stakeholder oriented  - Takes into account the concerns and claims of those involved and affected by the course or programme of education including students, faculty, patients, and the governing body of medical education in the country. 
If, let's say, we had one year to evaluate the success of a project (intervention), what kind of data should we collect in order to approach evaluation from these four criteria? 
  • student oriented - we could get the examination results (of whatever the medical school is currently using) at the start and at the end of the year in which the intervention has taken place, as well as ask students for self-assessments of their own learning. 
  • programme oriented - we could assess whether the changed curriculum more appropriately achieves the outcomes it was meant to reach; this assessment would be most relevant if there is a rubric for each of the outcomes set. (If there are no outcomes yet, perhaps it is an idea to define them at the start of the year.)
  • institution oriented - request external organizations to grade the quality of teaching. If the country that the school exists in has no external organization for quality control, there are international bodies that do so - one example is the WFME (World Federation for Medical Education), which is involved in the accreditation of medical schools and can assess whether a given school is fit to produce medical graduates. 
  • stakeholder oriented - a survey of the perceived adequacy of medical education before and after the intervention could be undertaken, directed at the stakeholders - students, faculty members, patients (who were possibly involved in the education process), and SPs (used in the OSCE intervention) 

references:
Wilkes M, Bligh J. Evaluating educational interventions. BMJ 1999 May 8;318(7193):1269-72.

Monday, 12 August 2019

work-based assessment tools

Work-based assessment in medical education refers to the assessment of working practices based on what doctors actually do in the workplace, and is predominantly carried out in the workplace itself (PMETB, 2007). In Miller's framework for assessing clinical competence, workplace-based methods of assessment target the highest level of the pyramid and collect information about doctors' performance in their everyday practice, the "Does" level of performance.  

It is important to recognize what kinds of work-based assessment tools have been suggested and implemented in medical education today.

work-based assessment tools

Mini-Clinical Evaluation Exercise (mini-CEX)
  • students engage in authentic workplace-based (inpatient, outpatient, emergency department etc) patient encounters, while being observed by faculty members
  • clinical tasks are carried out by the students, such as taking a focused history and carrying out a relevant physical examination, after which they are required to provide a summary of the encounter followed by suggested next steps (e.g. working diagnosis, management plan, further investigations)
  • verbal feedback is given by the assessor to the students
Clinical Encounter Cards (CEC)

  • similar to the Mini-CEX but with a more structured set of assessment components
  • these components include:
    • History taking
    • physical examination
    • professional behavior
    • technical skill
    • case presentation
    • problem formulation (diagnosis)
    • problem solving (management)
Direct Observation of Procedural Skills (DOPS)
  • more focused on the procedural skills evaluation in the workplace setting
  • emphasis is given to specific, technical feedback based on direct observation, so as to improve the students' clinical skills.
Case-based discussion (CbD)
  • discussion based on the patient case records
  • focuses more on the clinical reasoning and problem-solving skills of the student, as the assessor asks detailed questions about the rationale behind decisions made in authentic clinical practice.
Multi-source feedback/360 degree assessment
  • a systematic collection of performance data and feedback for an individual student, using multiple sources (such as peers, ward nurses, patients, and faculty members) 
  • focuses more on routine behavior than on performance and actions during specific patient encounters. 
Clinical Work Sampling (CWS)

  • also considered a direct observation type assessment
  • components being tested are:
    • communication skills
    • Physical examination skills
    • diagnostic acumen
    • consultation skills
    • management skills
    • interpersonal behavior
    • continued learning skills
    • health advocacy skills 

Portfolio assessments
  • a collection of the students' learning records, including but not limited to actual patient encounters, patient case records, student-formulated action plans, and student reflections and learning.
  • the portfolio can then be used as a tool for feedback and assessment. 

Characteristics of good assessment tools


  • work-based assessment was created in the hope of providing a more authentic type of assessment with an opportunity for more relevant feedback. Hence a good work-based assessment should be:
    • authentic - using real case scenarios and patients
    • an opportunity for giving narrative feedback
  • "assessment drives learning" - work-based assessment promotes intrinsic motivation more than other assessments because of its high fidelity: it isn't hard to believe that if a student scores well on work-based assessments with positive feedback, s/he is on the way to becoming a good doctor. Hence it should be a driving factor of intrinsic motivation.
  • appropriate planning and implementation, as well as continued improvement by faculty members
  • provide the appropriate kinds of patients (content validity) and a sufficient number of patient encounters (a good representation of actual patient demographics) to ensure high validity and reliability
  • provide multiple different assessment modalities testing various components of student actions, skills, and competence - different assessment tools have different strengths and weaknesses. 


Wednesday, 7 August 2019

improving medical education - education methods (27)

There is, I believe, an endless list of ways to learn.

The success of teaching and learning is a difficult thing to measure, and it can be influenced by many things, such as the planning, the implementation, the relevance to social and community contexts, the proficiency of the lecturer, student engagement, etc.

In this post, I attempt to highlight the age-old lecture method of education, and its strengths and weaknesses.

strengths and weaknesses of lecture-based education

Strengths
  • it is expected - most, if not all, students are familiar with lectures. It is not a difficult skill to be able to listen and learn. Hence it is probably the education tool that can be utilized with the least amount of resistance, from both students and lecturers.
  • it can serve the largest volume of students per unit time - possibly with the exception of e-learning modes of education, lecture-based education is very cost- and time-effective, if effectiveness is measured purely by the delivery of study material
  • the content is less likely to be affected by environmental factors - most lecture theaters have a roof, walls, and doors. A lecture is less likely to be cancelled due to heavy rain, or blackouts (lecturers can still present in darkness)
  • easily carried out - most medical schools have lecture halls, so no additional cost or staffing is needed. Furthermore, no additional equipment needs to be purchased for a lecture to be carried out. Other learning methods, e.g. small-group learning, need a new set of facilities, like group discussion rooms and tables and chairs. 
Weaknesses
  • limited by time - lecturers can only present a set amount of content in the stipulated time frame. Hence the volume of content is limited by the time for which the lecturer is available to conduct the lecture
  • it usually only reaches the lowest level of Miller's pyramid of competency, skill, and knowledge - it is difficult to communicate skills and competency through a lecture without demonstrations or role-plays. Usually, lectures only impart knowledge. 
  • largely lecturer-dependent - some good lecturers manage to engage students, even inspire them to learn more on their own. However, the worst of the lecturers are (for lack of a better word) a waste of time. 
  • probably the least engaging of the many modes of education - most students have spent years just listening to lectures. While they are most familiar with this format, students will never be as engaged as in PBLs, group discussions, or game-based education. Students frequently sleep during lectures, which reduces the learning efficacy to near zero. It is not as easy to sleep during a small group discussion. 

methods for student assessment and tutor evaluation

It is important to assess student learning achievements and tutor efficiency. 
In a way, student assessment is also an assessment of tutor efficiency, since unless the curriculum is largely dependent upon student self-regulation and peer motivation, the role of the tutor is paramount to successful learning. 

Methods of student assessment are varied and extensively researched. They include:
  • Traditional MCQ-style questions - these can be used in the traditional way, but also evaluated more extensively to find out which portions of knowledge may be deficient, where the curriculum needs to put in more effort, etc. (item analysis). A slightly better alternative is the EMQ (Extended Matching Question), which has more choice options and less room for guessing. 
  • MEQ (Modified Essay Questions) - a deciphering exercise on a patient case scenario and a test of clinical diagnosis and decision making. This can be conducted to test the second level of Miller's pyramid, the "Knows how".
  • OSCE (Objective Structured Clinical Examination) consisting of a clinical scenarios and SP (Standardized Patients) to emulate an authentic clinical setting. Students can be assessed on the "shows how" component of the Miller's pyramid of skills and competencies. 
  • Mini-CEX (Clinical Evaluation Exercise) - a supervised learning event which involves direct observation of a doctor-patient encounter by a trained tutor for formative assessment purposes. This examines the "Shows how" and, partially, the "Does" portion of Miller's pyramid. 
  • Portfolio assessment - outlined by Collection, Reflection, Evaluation (by assessors), Defence (of evidence by the student), and assessment and feedback. The evidence collected should be evidence of the students' clinical encounters and of the thought processes and decisions made by the students. With the Reflection portion of the portfolio included, it can be used to assess whether the student possesses the qualities of a reflective practitioner who can continuously reflect on their own actions. This assessment method is thought to test the topmost "Does" component of Miller's pyramid. 
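The item analysis mentioned in the MCQ bullet above boils down to two simple statistics per question: a difficulty index (the proportion of students answering correctly) and a discrimination index (how much better the top scorers did on the item than the bottom scorers). A toy sketch with hypothetical marks:

```python
def item_analysis(responses):
    # responses: list of (total_test_score, 1 if item correct else 0)
    ranked = sorted(responses, key=lambda r: r[0], reverse=True)
    n = len(ranked) // 3  # conventional upper/lower-third split
    upper, lower = ranked[:n], ranked[-n:]
    difficulty = sum(c for _, c in ranked) / len(ranked)
    discrimination = (sum(c for _, c in upper) - sum(c for _, c in lower)) / n
    return difficulty, discrimination

# hypothetical marks for nine students on one MCQ item
data = [(90, 1), (85, 1), (80, 1), (70, 1), (65, 0),
        (60, 1), (50, 0), (45, 0), (40, 0)]
difficulty, discrimination = item_analysis(data)
print(round(difficulty, 2), round(discrimination, 2))  # → 0.56 1.0
```

An item that everyone gets right (difficulty near 1.0), or that strong and weak students answer alike (discrimination near 0), tells the examiners little and flags where the question bank or the curriculum needs more effort.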
Tutor assessment is a difficult one to discuss, as it is not as thoroughly researched as student assessment. However, some attempts can be made through:
  • student assessments - as mentioned above, student performance indirectly reflects tutor excellence. 
  • student satisfaction survey - although it may be biased, and students may not recognize a good tutor, a satisfaction survey and objective questions like "is the lecture well understood?", "does the lecturer try to engage students?", "does s/he make use of multimedia learning tools?" can be used to assess tutor strengths and weaknesses. 
  • peer evaluation - in the form of commentary. It may be useful for tutors to assess each other, giving frank opinions to help each other improve
  • self-assessment and evaluation - tutors are expected to be reflective practitioners themselves. However, dedicated sessions for reflection may help facilitate this process of self-assessment, reflection, and improvement. 




Monday, 29 July 2019

(2)

assessing the educational needs of a country to improve clinical education

When a country decides whether graduates from a medical school possess sufficient competencies to practice independently, it needs to gather a sufficient amount of reliable data, from which inferences can be made and corrective actions taken.

It can be said that the educational needs of a country depend on the situations in which the country needs its medical graduates to function.

The competency requirements for independent practice in a developing tropical country (knowledge about various tropical infectious diseases and nutritional deficiencies, and a good grasp of community preventive medicine to prevent common fecal-oral route infections, etc.) and for independent practice in an urbanized city clinic (knowledge about mental health, non-communicable diseases, and health screening for the over-nourished, etc.) are vastly different.

Thus, during the course of the medical education, the content of the curriculum and the content of the formative and summative assessment should match the needs of the local community. 

To give an example, it would be pointless to have a patient with scurvy in the exit examination of an urban developed country. 
Likewise, a medical student in a developing country should be assessed on their skills and competence to diagnose and manage the common illnesses of that local area. 

therefore, one way to assess the educational needs of a country is to have a good understanding of the country's healthcare needs and deficiencies. 

Another way to improve clinical education is to assess the graduates to find out which specific competencies are deficient. Is it communication skills? Understanding of community medicine? Is it a lack of the knowledge required to diagnose and manage patients? Or a lack of understanding of clinical culture?

To investigate these deficiencies, a simple questionnaire-based study can be done, for example asking the graduates where they feel they lack competencies, and/or asking their colleagues where the graduates lack skills and competencies. 

Once the graduates' deficiencies are identified, it is up to the school to set outcome goals that fill in these deficiencies while the students are still in medical school. 


possible intervention activities to improve clinical education

about content - 

The examination should be based on the demographic data of the country, and the common things that graduates will face once they exit medical school should be asked about more frequently. 

"Assessment drives learning" - once the assessment criteria have been set, the curriculum can be re-set, and students themselves will have the motivation to meet the new assessment criteria, which are better suited to practice in that particular country.

about context - 

Some contextual things that can be changed to make medical graduates more grounded and ready for practice are: 
increased clinical exposure, and
competency-based assessments. 

Students should be assessed at the "Does" and "Shows how" levels of skills and competencies (of Miller's four levels of knowledge, skills, and competencies) while they are exposed to the actual situation of practice in the local context (high-authenticity work-based assessments).

interventions - 

One intervention activity that can be done to improve clinical education is to hold frequent mini-CEXs in the clinical context, followed by an immediate feedback session to investigate and correct any deficiencies in the competence and skills of the medical students.

Another intervention is the introduction of a portfolio system, where each student would simulate what they would do when faced with each patient they encounter while observing in the clinical context, record this in the form of a portfolio, and discuss these clinical scenarios with their mentors. 

One other intervention is faculty development.
Typically, medical school teachers are former doctors who have had clinical exposure and were perhaps successful physicians or surgeons, but who have had no training in teaching and learning.

For these doctors-turned-teachers, orientation programs must be put in place to turn them into medical educators.

A comprehensive faculty development program should be built upon: 
  • professional development - new faculty members should be oriented to the university and to their various faculty roles; 
  • instructional development - all faculty members should have access to teaching-improvement workshops, peer coaching, mentoring, and/or consultations; 
  • leadership development - academic programs depend upon effective leaders and well-designed curricula; these leaders should develop the skills of scholarship to effectively evaluate and advance medical education; 
  • organizational development - empowering faculty members to excel in their roles as educators requires organizational policies and procedures that encourage and reward teaching and continual learning. 

https://europepmc.org/abstract/med/9580715