Friday, 23 August 2019

the ARCS model of motivational design

According to John Keller's ARCS Model of Motivational Design, there are four conditions for promoting and sustaining motivation in the learning process: Attention, Relevance, Confidence, and Satisfaction (ARCS). (1)(2)


The ARCS Model of motivation was developed in response to a desire to find more effective ways of understanding the major influences on the motivation to learn, and to find systematic ways of identifying and solving problems with learning motivation. (3)

The resulting model contains a four-category synthesis of variables that encompasses most areas of research on human motivation, and a motivational design process that is compatible with typical instructional design models.




Attention

(1) Perceptual arousal – uses surprise or uncertainty to gain interest, through novel, surprising, incongruous, or uncertain events; or 
(2) Inquiry arousal – stimulates curiosity by posing challenging questions or problems to be solved.

Tactics for this can range from simple unexpected events (e.g. a loud whistle, an upside-down word in a visual) to mentally stimulating problems that engage a deeper level of curiosity, especially when presented at the beginning of a lesson.

Another element is variation, which is necessary to sustain attention. People like a certain amount of variety and they will lose interest if your teaching strategies, even the good ones, never change. 

Relevance

Even if curiosity is aroused, motivation is lost if the content has no perceived value to the learner. 
Relevance results from connecting the content of instruction to important goals of the learners, their past interests, and their learning styles. 

One traditional way to do this is to relate instructional content to the learners’ future job or academic requirements. Another, often more effective, approach is to use simulations, analogies, case studies, and examples related to the students' immediate and current interests and experiences. 

For example, secondary school children enjoy reading stories with themes of stigma, popularity, and isolation because these are important issues at that time of their lives. 

Confidence


Confidence is built by helping students establish positive expectancies for success. 

Often students have low confidence because they have very little understanding of what is expected of them. By making the objectives clear and providing examples of acceptable achievements, it is easier to build confidence. 

Another aspect of confidence is how one attributes the causes of one’s successes or failures. Being successful in one situation can improve one’s overall confidence if the person attributes success to personal effort or ability. 
If the student believes that success was due to external factors such as luck, lack of challenge, or decisions of other people, then confidence in one’s skills is not likely to increase. 

To sustain this motivation, the fourth condition is required:

Satisfaction

Satisfaction refers to positive feelings about one's accomplishments and learning experiences. It means that students receive recognition and evidence of success that support their intrinsic feelings of satisfaction, and that they believe they have been treated fairly.

Tangible extrinsic rewards can also produce satisfaction, and they can be either substantive or symbolic. 

That is, they can consist of grades, privileges, promotions or such things as certificates, monogrammed school supplies, or other tokens of achievement. 

Opportunities to apply what one has learned coupled with personal recognition support intrinsic feelings of satisfaction. Finally, a sense of equity, or fairness, is important. 

Students must feel that the amount of work required by the course was appropriate, that there was internal consistency between objectives, content, and tests, and that there was no favoritism in grading. 


References
  1. Keller, J. M. (2009). Motivational design for learning and performance: The ARCS model approach. Springer Science & Business Media.
  2. Keller, J. M. (1987). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2-10.
  3. Keller, J. M. (1987). Journal of Instructional Development, 10, 2. https://doi.org/10.1007/BF02905780
  4. https://app.nova.edu/toolbox/instructionalproducts/ITDE_8005/weeklys/2000-Keller-ARCSLessonPlanning.pdf

Friday, 16 August 2019

what makes a good assessment?

Not only in medical schools but in any educational experience, assessment is the bane of students' existence, yet it is starting to be re-evaluated as a valuable tool for learning.

So if students hate assessments (they always will, no matter how many candies you give them), how can we at least make them effective and useful for the students?

A good assessment should have:
  • Generalizability
    • an "A" scored in an exam should mean that the student has exceptional ability in a certain field. Generalizability means that the score reflects the ability: for example, a student who scored an A in anatomy should have more knowledge about anatomy in general than another student who scored a C. 
    • a good assessment with generalizability is one where we can conclude something from the results. For example, a pass in a driving licence exam suggests that the person is equipped with enough knowledge and skill to drive a car (hopefully)
  • valid (with evidence of validity) - does the test measure what it is supposed to measure?
    • content validity 
      • the assessment content should be one that reflects the learning outcomes and teaching strategies. 
      • some common mistakes made are;
        • construct underrepresentation - too little is sampled by the assessment - e.g. a 5-minute interview being the only assessment used to determine whether a student enters a university. 
        • construct-irrelevant variance - irrelevant material is tested - e.g. anatomy: name the carpal bones, in Latin.  
    • response process
      • is about the administration, management and implementation of the exam 
      • if the students say "we don't think it is fair to be tested this way" or "we don't know how to pass this exam", there is a need to re-evaluate the validity of the exam. 
    • internal structure
      • item difficulty, discrimination index
      • reliability 
      • standard variance 
    • relationship variables
      • this is how related the assessments are in terms of the abilities tested - for example, a clinical Mini-CEX exam score should correlate more with OSCE than a knowledge-based MCQ. 
      • hence an assessment can be considered valid if its score has a positive correlation with the score of another exam that tests the same abilities. 
    • consequences
      • the influence the assessment has on the stakeholders - i.e. the students, educators, and society. 
      • for example an assessment that is so difficult that it would discourage students, or drive them to insanity, is not a valid assessment. 
  • Reliability - degree to which an assessment tool produces stable and consistent results
    • test-retest reliability - does retaking the same test produce similar results?
    • parallel forms reliability - does asking the same construct in a different test produce the same score?
    • inter-rater reliability - do different judges give the same student similar scores?
    • internal consistency reliability - does asking the same construct in different question produce the same answer?
A more straightforward explanation was given by Van der Vleuten (1996), who proposed five criteria for determining the usefulness of a particular method of assessment:
  • reliability (the degree to which the measurement is accurate and reproducible), 
  • validity (whether the assessment measures what it claims to measure), 
  • impact on future learning and practice, 
  • acceptability to learners and faculty, 
  • costs (to the individual trainee, the institution, and society at large).
Van Der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ 1996;1:41-67
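Van der Vleuten framed these criteria as multiplicative: an assessment that scores near zero on any one criterion has low overall utility, no matter how strong the others are. A sketch of that idea (the per-criterion weights he included are omitted here for brevity, and the 0-to-1 ratings are an illustrative convention):

```python
def utility(reliability, validity, impact, acceptability, cost_effectiveness):
    """Overall utility as a product of the five criteria, each rated 0..1.

    Multiplicative on purpose: one near-zero factor sinks the whole score.
    """
    u = 1.0
    for factor in (reliability, validity, impact, acceptability, cost_effectiveness):
        u *= factor
    return u

# a reliable, valid exam that learners and faculty reject is still a poor assessment
print(utility(0.9, 0.9, 0.8, 0.1, 0.7))
```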

evaluation of a medical education intervention

How is a medical education intervention evaluated?

It is one tough task to suggest an education reform for a medical school, to fork out money for structural change, and to convince a reluctant faculty to change whatever seems to be working fine in order to make things better.

But there is another tough nut to crack - evaluating such interventions once they are done.

how could we evaluate the success of any medical educational intervention?

Educational evaluation is the systematic appraisal of the quality of teaching and learning. Just as with assessment, evaluation can have a formative role, identifying areas where teaching can be improved, or a summative role, judging the effectiveness of teaching.

Four general approaches to educational evaluation have been suggested by Wilkes and Bligh (1999) -

  • student oriented - predominantly uses measurements of student performance (usually test results) as the principal indicator.
  • programme oriented - compares the performance of the course as a whole to its overall objectives and often involves descriptions of curriculum or teaching activities.
  • institution oriented - usually carried out by external organisations and aimed at grading the quality of teaching for comparative purposes.
  • stakeholder oriented - takes into account the concerns and claims of those involved in and affected by the course or programme of education, including students, faculty, patients, and the governing body of medical education in the country. 
If, let's say, we had one year to evaluate the success of a project (intervention), what kind of data should we collect in order to approach evaluation from the four perspectives mentioned above? 
  • student oriented - we could compare examination results (of whatever the medical school currently uses) at the start of the year and at the end of the year in which the intervention took place, as well as ask students for self-assessments of their own learning. 
  • programme oriented - we could assess whether the changed curriculum better achieves the outcomes it was meant to reach; this assessment would be most relevant if there is a rubric for each of the outcomes set (if there are no outcomes yet, perhaps it is an idea to create them at the start of the year). 
  • institution oriented - request external organisations to grade the quality of teaching. If the country the school is in has no external quality-control organisation, there are international bodies that do so - one example is the WFME (World Federation for Medical Education), which is involved in the accreditation of medical schools and can assess whether a medical school is fit to produce medical graduates. 
  • stakeholder oriented - a survey of the perceived adequacy of medical education before and after the intervention could be undertaken, directed at the stakeholders: students, faculty members, patients (who were possibly involved in the education process), and SPs (used in the OSCE intervention). 
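For the student-oriented data, the simplest quantitative summary is a paired before/after comparison for the same students. A minimal sketch with hypothetical scores (a real evaluation would add a paired significance test and account for cohort differences):

```python
def before_after_summary(before, after):
    """Summarise paired exam scores for the same students.

    Returns (mean score change, fraction of students who improved).
    """
    diffs = [a - b for b, a in zip(before, after)]
    mean_change = sum(diffs) / len(diffs)
    improved = sum(d > 0 for d in diffs) / len(diffs)
    return mean_change, improved

# hypothetical start-of-year vs end-of-year scores for four students
before = [60, 55, 70, 65]
after = [65, 60, 68, 70]
print(before_after_summary(before, after))
```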

references:
Wilkes M, Bligh J. Evaluating educational interventions. BMJ 1999;318(7193):1269-72.

Monday, 12 August 2019

work-based assessment tools

Work-based assessment in medical education refers to the assessment of working practices based on what doctors actually do in the workplace, and is predominantly carried out in the workplace itself (PMETB, 2007). In Miller's framework for assessing clinical competence, workplace-based methods of assessment target the highest level of the pyramid, the "does" level of performance, and collect information about doctors' performance in their everyday practice.  

It is important to recognise what kinds of work-based assessment tools have been suggested and implemented in medical education today.

work-based assessment tools

Mini-Clinical Evaluation Exercise (mini-CEX)
  • students engage in authentic workplace-based patient encounters (inpatient, outpatient, emergency department, etc.) while being observed by faculty members
  • clinical tasks are carried out by the students, such as taking a focused history and carrying out a relevant physical examination, after which they are required to provide a summary of the encounter followed by suggested next steps (e.g. working diagnosis, management plan, further investigations)
  • verbal feedback is given by the assessor to the students
Clinical Encounter Cards (CEC)

  • similar to the mini-CEX but with a more structured set of assessment components
  • these components include:
    • History taking
    • physical examination
    • professional behavior
    • technical skill
    • case presentation
    • problem formulation (diagnosis)
    • problem solving (management)
Direct Observation of Procedural Skills (DOPS)
  • more focused on the procedural skills evaluation in the workplace setting
  • emphasis is given to the specific, technical feedback given based on direct observation, so as to improve the students' clinical skills.
Case-based discussion (CbD)
  • discussion based on the patient case records
  • focuses more on the clinical reasoning and problem-solving skills of the student, as the assessor asks detailed questions about the rationale behind decisions made in authentic clinical practice.
Multi-source feedback/360 degree assessment
  • a systematic collection of performance data and feedback for an individual student, using multiple sources (such as peers, ward nurses, patients, faculty member) 
  • focuses more on the routine behavior rather than the performance and action during specific patient encounters. 
Clinical Work Sampling (CWS)

  • also considered a direct observation type assessment
  • components being tested are:
    • communication skills
    • Physical examination skills
    • diagnostic acumen
    • consultation skills
    • management skills
    • interpersonal behavior
    • continued learning skills
    • health advocacy skills 

Portfolio assessments
  • a collection of the student's learning records, including but not limited to actual patient encounters, patient case records, student-formulated action plans, and student reflections on learning.
  • the portfolio can then be used as a tool for feedback and assessment. 

Characteristics of good assessment tools


  • work-based assessment was created in the hope of providing a more authentic type of assessment that provides an opportunity for more relevant feedback. Hence a good work-based assessment should be:
    • authentic - using real case scenarios and patients
    • an opportunity for giving narrative feedback
  • "assessment drives learning" - work-based assessment promotes intrinsic motivation more than other assessments do because of its high fidelity: it isn't hard to believe that a student who scores well on work-based assessments, with positive feedback, is on the way to becoming a good doctor. Hence it should be a driving factor of intrinsic motivation
  • appropriate planning and implementation, as well as continued improvement by faculty members
  • provide an appropriate mix of patients (content validity) and a sufficient number of patient encounters (a good representation of actual patient demographics) to ensure high validity and reliability
  • provide multiple different assessment modalities testing various components of student actions, skills and competence - different assessment tools have different strengths and weaknesses. 


Wednesday, 7 August 2019

improving medical education - education methods (27)

There is - I believe - an endless list of ways to learn.

The success of teaching and learning is difficult to measure, but it can be influenced by many things, such as the planning, its implementation, its relevance to the social and community contexts, the proficiency of the lecturer, student engagement, etc.

In this post, I attempt to highlight the age-old lecture method of education, and its strengths and weaknesses.

strengths and weaknesses of lecture-based education

Strengths
  • it is expected - most, if not all, students are familiar with lectures. It is not a difficult skill to listen and learn. Hence it is probably the education tool that can be utilised with the least resistance from either students or lecturers.
  • it can serve the largest volume of students per unit time - with the possible exception of e-learning, lecture-based education is highly cost- and time-effective, if effectiveness is measured purely by the delivery of study material
  • the content is less likely to be affected by environmental factors - most lecture theatres have a roof, walls, and doors. Lectures are less likely to be cancelled due to heavy rain, or blackouts (lecturers can still present in darkness)
  • easily carried out - most medical schools have lecture halls, so no additional cost or staffing is needed, and no additional equipment needs to be purchased for a lecture to be carried out. Other learning methods, e.g. small-group learning, need a new set of facilities, like group discussion rooms and tables/chairs. 
Weaknesses
  • limited by time - lecturers can only present a set amount of content in the stipulated time frame. Hence the volume of content is limited by the time the lecturer is available to conduct the lecture
  • it usually addresses only the lowest level of Miller's pyramid of competency, skill, and knowledge - it is difficult to communicate skills and competency through a lecture without demonstrations or role-plays. Usually, lectures only impart knowledge. 
  • largely lecturer-dependent - some good lecturers manage to engage students, even inspire them to learn more on their own. However, the worst lecturers are (for lack of a better word) a waste of time. 
  • probably the least engaging of the many modes of education - most students have spent years just listening to lectures. While they are most familiar with this, students will never be as engaged as in PBLs, group discussions, or game-based education. Students frequently sleep during lectures, which reduces the learning efficacy to near zero. It isn't as easy to sleep during small group discussions. 

methods for student assessment and tutor evaluation

It is important to assess both student learning achievements and tutor effectiveness. 
In a way, student assessment is also an assessment of tutor effectiveness, since unless the curriculum depends largely on student self-regulation and peer motivation, the role of the tutor is paramount to successful learning. 

Methods of student assessment are varied and extensively researched. They include:
  • Traditional MCQ-style questions - these can be used in the traditional way, but evaluated more extensively to find out which portions of knowledge may be deficient, where the curriculum needs to put in more effort, etc. (item analysis). A slightly better alternative is the EMQ (Extended Matching Question), which offers more answer options and less room for guessing. 
  • MEQ (Modified Essay Questions), a deciphering exercise based on a patient case scenario and a test of clinical diagnosis and decision making. This can be used to test the second level of Miller's pyramid, the "knows how". 
  • OSCE (Objective Structured Clinical Examination) consisting of a clinical scenarios and SP (Standardized Patients) to emulate an authentic clinical setting. Students can be assessed on the "shows how" component of the Miller's pyramid of skills and competencies. 
  • Mini-CEX (Mini-Clinical Evaluation Exercise), a supervised learning event involving direct observation of a doctor-patient encounter by a trained tutor for formative assessment purposes. This examines the "shows how" and, partially, the "does" portion of Miller's pyramid. 
  • Portfolio assessment, which can be outlined as collection, reflection, evaluation (by assessors), defence (of evidence by the student), and assessment and feedback. The "evidence" collected should document the student's clinical encounters and the thought processes and decisions behind them. With the reflection portion included, the portfolio can be used to assess whether the student has the qualities of a reflective practitioner who continuously reflects on their own actions. This method is thought to test the topmost "does" component of Miller's pyramid. 
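The item analysis mentioned for MCQs usually starts with two statistics per question: a difficulty index (proportion of students answering correctly) and a discrimination index (how well the item separates strong from weak students). A minimal sketch in plain Python, using the common but not universal 27% upper/lower-group convention, with hypothetical data:

```python
def item_analysis(responses):
    """Difficulty and discrimination for one MCQ item.

    responses: list of (total_exam_score, answered_correctly) pairs,
    with answered_correctly as 1 or 0 for this particular item.
    """
    n = len(responses)
    difficulty = sum(c for _, c in responses) / n  # proportion correct

    # compare the top and bottom 27% of students ranked by total score
    ranked = sorted(responses, key=lambda r: r[0], reverse=True)
    g = max(1, round(n * 0.27))
    upper = sum(c for _, c in ranked[:g]) / g
    lower = sum(c for _, c in ranked[-g:]) / g
    return difficulty, upper - lower

# hypothetical item: mostly the high scorers got it right
data = [(95, 1), (90, 1), (88, 1), (80, 1), (75, 0),
        (70, 1), (60, 0), (55, 0), (50, 0), (40, 0)]
print(item_analysis(data))
```

A discrimination index near zero (or negative) flags an item that fails to separate stronger from weaker students and may need rewriting.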
Tutor assessment is a difficult one to discuss, as it is not as thoroughly researched as student assessment. However, some attempts can be made through:
  • student assessments - as mentioned above, student performance indirectly reflects tutor excellence. 
  • student satisfaction surveys - although they may be biased, and students may not always recognise a good tutor, a satisfaction survey with objective questions like "is the lecture well understood?", "does the lecturer try to engage students?", and "does s/he make use of multimedia learning tools?" can be used to assess tutor strengths and weaknesses. 
  • peer evaluation - in the form of commentary. It may be useful for tutors to assess each other, giving frank opinions in order to improve each other
  • self-assessment and evaluation - tutors are expected to be reflective practitioners themselves. Dedicated sessions for reflection may help facilitate this process of self-assessment, reflection, and improvement. 




Monday, 29 July 2019

(2)

assessing the educational needs of a country to improve clinical education

When a country decides whether graduates from a medical school possess sufficient competencies to practice independently, it needs to gather a sufficient amount of reliable data, from which inferences can be made and corrective actions taken.

It can be said that the educational needs of a country depend on the situations in which the country needs its medical graduates to function.

The competency requirements for independent practice in a developing tropical country (knowledge of various tropical infectious diseases and nutritional deficiencies, and a good grasp of community preventive medicine to prevent common fecal-oral infectious diseases, etc.) and for independent practice in an urbanised city clinic (knowledge of mental health, non-communicable diseases, and health screening for the over-nourished, etc.) are vastly different.

Thus, during the course of the medical education, the content of the curriculum and the content of the formative and summative assessment should match the needs of the local community. 

To give an example, it would be pointless to include a patient with scurvy in the exit examination of an urban developed country. 
Likewise, a medical student in a developing country should be assessed on their skills and competence to diagnose and manage the common illnesses of that local area. 

Therefore, one way to assess the educational needs of a country is to gain a good understanding of the country's healthcare needs and deficiencies. 

Another way to improve clinical education is to assess the graduates to find out which specific competencies are deficient. Is it communication skills? Understanding of community medicine? A lack of the knowledge required to diagnose and manage patients? Or a lack of understanding of clinical culture?

To investigate these deficiencies, a simple questionnaire-based study can be done, for example asking the graduates where they lack competencies, and/or asking their colleagues where the graduates lack skills and competencies. 

Once data on graduates' deficiencies have been gathered, it is up to the school to set outcome goals that fill in those deficiencies while students are still in medical school. 


possible intervention activities to improve clinical education

about content - 

The examination should be based on the demographic data of the country, and the common problems that graduates will face once they exit medical school should be asked about more frequently. 

"assessment drives learning" - once the assessment criteria have been set, the curriculum can be reset, and students themselves will be motivated to meet the new assessment criteria, which are better suited to practice in that particular country.

about context-

Some contextual things that can be changed to make medical graduates more grounded and ready for practice are: 
to increase clinical exposure, and
to base assessments on competencies. 

Students should be assessed at the "does" and "shows how" levels of skills and competencies (of Miller's four levels of knowledge, skills, and competencies) while they are exposed to the actual conditions of practice in the local context (high-authenticity work-based assessments)

interventions-

One intervention activity that can improve clinical education is frequent mini-CEX in the clinical context, followed by an immediate feedback session to investigate and correct any deficiencies in the students' competence and skills.

Another intervention is the introduction of a portfolio system, where each student simulates what they would do when faced with each patient they encounter while observing in the clinical context, records this in a portfolio, and discusses these clinical scenarios with their mentors. 

One other intervention is faculty development.
Typically, medical school teachers are doctors who have had clinical exposure, perhaps as successful physicians or surgeons, but who have had no training in teaching and learning.

For these doctors-turned-teachers, orientation programs must be put in place to turn them into medical educators.

A comprehensive faculty development program should be built upon:
  1. professional development (new faculty members should be oriented to the university and to their various faculty roles);
  2. instructional development (all faculty members should have access to teaching-improvement workshops, peer coaching, mentoring, and/or consultations);
  3. leadership development (academic programs depend upon effective leaders and well-designed curricula; these leaders should develop the skills of scholarship to effectively evaluate and advance medical education);
  4. organizational development (empowering faculty members to excel in their roles as educators requires organizational policies and procedures that encourage and reward teaching and continual learning). 

https://europepmc.org/abstract/med/9580715 

improving clinical education - how can we increase clinical competencies in rural areas? (1) H26

One of the main missions of a medical school is to produce healthcare professionals who are competent enough to practice in the context of where it is located, in the social background where the medical school exists.




Tuesday, 2 July 2019

課題中心インストラクション TCI (1)

(原文をはこちら)

これらはhttps://www.amazon.co.jp/Instructional-Design-Theories-Models-Learner-Centered-Education/dp/1138012939 から抜粋された文を自分なりに翻訳し、自分自身の理解に利用しているものです。


TCIは、課題中心の学習方法であり、5つの主要要素(Merrill、2002b、2009)の使用を規定しています:学習タスク・アクティベーション・デモンストレーション/モデリング・アプリケーション・およびインテグレーション・エクスプロレーション(Francom&Gardner、2014)。
TCIの目標は、効果的・効率的な学習と共に、現実的な文脈への応用と知識の伝達を重視する傾向があります(Francom&Gardner、2013)。
対照的に、問題ベース学習(PBL)の目標は、柔軟な知識、深い理解、問題解決スキル、自主学習スキル、効果的コラボレーション、そして自主的動機付け開発により重点を置く傾向があります(Barrows、1996; Jonassen、2000)。
TCIは、タスクパフォーマンスを支援するために時間の経過とともに消えていく足がかりScaffolding(Masters&Yelland、2013)を含む、「純粋な」問題ベース学習に必ずしも存在しない、学習に関するいくつかの重要なやり方を追加します。
すなわち、PBLに問題点(個々の学びに差が有ったり、時間のロスがある可能性が出てくる)がある事は大分前から指摘されて来た背景があり、それらを 乗り越えるためにTCIが開発されて来たのですね。

--------------------
I myself learnt through PBL, and I have felt both its effectiveness and its shortcomings first-hand.
For example: the anxiety that, because each group had a different learning task, perhaps only my group was doing something low-level; the suspicion that other groups' facilitators might be more capable; and the unfairness in learning that arises when roles are divided among individuals.
While we were learning, we students did convey these feelings to our instructors, but convincing, reassuring answers rarely came.
For my part, I came to think that such anxiety and unfairness are common in society, and that facing those feelings is itself part of learning.

It seems that TCI has been developed as an answer to these issues.
At the moment there is much that remains unclear to me, but it may become clearer as I read on.
------------------

TCI Task-Centered Instruction (1)

(for the Japanese version, please press here)

These texts are excerpts from the book below, which I have highlighted and commented on in order to understand it further. 

https://www.amazon.co.jp/Instructional-Design-Theories-Models-Learner-Centered-Education/dp/1138012939


TCI is a task-centered approach to learning that prescribes the use of five main elements (Merrill, 2002b, 2009): learning tasks, activation, demonstration/modeling, application, and integration/exploration (Francom & Gardner, 2014). 
TCI goals tend to value application and transfer of knowledge to realistic contexts as well as effective and efficient learning (Francom & Gardner, 2013). 
By contrast, problem-based learning goals tend to be more concerned with developing flexible knowledge, deep understanding, problem solving skills, self-directed learning skills, effective collaboration, and self-directed motivation (Barrows, 1996; Jonassen, 2000). 
TCI adds on several important prescriptions for learning that are not necessarily present in “pure” problem-based learning, including scaffolding (Masters & Yelland, 2002) that is faded out over time (Francom & Gardner, 2013) to help with task performance. 
There were increasing debates about newer learning methods like PBL, arguing that their learning speed and content coverage are relatively slow (Reigeluth, 2012; Spector, 2004).

This called for the development of TCI.

In summary, TCI is a learning technique similar to PBL that aims to overcome PBL's shortcomings (outcomes that vary from student to student, and inefficiency in reaching target skill acquisition) by providing prompts and hints along the way. 

-----------------

My comments so far: I am quite familiar with the idea of PBL, and fairly comfortable with the notion that PBL isn't the most cost-effective or consistent of learning methods. 
As students, we were always nervous imagining that our peers in different groups had more high-profile facilitators and therefore better hands-on teaching (mistakenly so), and we were anxious to know that our peers learnt different things depending on where the discussion went. 

I still don't have a very clear understanding of TCI and its methodology. I shall continue reading. 

-----------------

my opinion on teacher-based traditional lectures


Traditional lectures are what people expect from education. Typically, they involve a group of students listening to a single lecturer.

What I feel about traditional lectures:

  • they're not entirely bad
  • they're mostly boring, but this can be remedied fairly easily with effort
  • they should still be practiced to an extent, even in the most futuristic schools. 
What I think should be introduced while conducting traditional lectures, to maximize their effect:
  • the concept of Marketing: we need to sell our fascination and passion to the students. 
The very basics of marketing lie in the AIDA concept: people go through the stages of Attention, Interest, Desire, and Action before buying goods. 

I feel the process is similar when it comes to learning. I'm not talking about the exam-oriented learning that comes from a study-or-die mentality, but the more authentic kind, which is significantly less painful and leads to lifelong learning. 

Learning must come from within, but the difficult part, I guess, is to light that starting fire. Lecturers can help with that, because they are supposed to be exploding balls of passionate flame. 

I think most lecturers start from the notion that students are already interested in the subject. Or they think "only the worthy should remain (stay awake)", or that the subject is so narrow that only they themselves could find it interesting. None of these is the best way to approach the didactic method. In my opinion, a lecturer's most important aim is to light that fire: gaining attention, sparking interest, giving students the desire to learn, and providing material for students to act upon. 

This idea sparked in me when I was reading a Quora article about marketing techniques for writing catch copy that sells. The world of education has a lot to learn from the world of marketing: because marketing is directly associated with profit, it is thought about more rigorously, with more money and resources. A smart thing to do is to borrow those concepts and apply them to education. 

Things I noticed while translating with machine-translation support

I am currently translating an education textbook, and the online software I use for the translation work is so good that I am genuinely impressed.

https://www.matecat.com/

This online software has many strengths, so let me list them in bullet points.

  • free
  • no download required
  • lightweight
  • few glitches
  • excellent machine translation
  • collaboration is possible & easy
  • searchable & easy to revise
Also, here are some things worth keeping in mind when using software with machine-translation support:
  • once you decide on a word choice, stick with it and don't waver
  • if you get a little stuck, use the comment function and come back later
  • deliberately leave the parts you don't know how to translate, and return to them after finishing the chapter
  • configuring your computer so that clicking an English passage instantly searches a dictionary saves time
  • make technical terms easy to spot and searchable
  • once you decide on a technical term, register it in the Glossary right away
  • to track progress, the "to do" character count (bottom left) may be useful
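Two of these tips, fixing a term choice once and registering it in the glossary immediately, amount to keeping a term list and checking each draft against it. A toy sketch of that consistency check (the glossary entries and the `check_consistency` helper are made-up examples for illustration, not Matecat's actual glossary feature):

```python
# A personal glossary: source term -> the one agreed-upon translation.
GLOSSARY = {
    "scaffolding": "足がかり",
    "task-centered instruction": "課題中心インストラクション",
}

def check_consistency(source_text, translated_text):
    """Return glossary terms present in the source whose registered
    translation does not appear in the draft translation."""
    return [
        term
        for term, translation in GLOSSARY.items()
        if term in source_text.lower() and translation not in translated_text
    ]
```

An empty result means every registered term that occurs in the source was translated with its agreed-upon rendering.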


Translation log: Green book 4, Chapter 3

Today I start on Chapter 3 of Task-based learning.

Chapter 3 covers pages 65-87, 22 pages in total; if I finish it in one week, that works out to 4-5 pages a day.
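The pacing here is simple arithmetic, and the same calculation works for any chapter. A one-line sketch, counting pages the way the post does (87 - 65 = 22):

```python
import math

def pages_per_day(first_page, last_page, days):
    """Minimum whole pages per day to finish the span within the given days."""
    total_pages = last_page - first_page  # 87 - 65 = 22, as counted in the post
    return math.ceil(total_pages / days)
```

For this chapter, `pages_per_day(65, 87, 7)` gives 4, the lower end of the 4-5 pages a day above.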

Today I got to page 69. That's 5 pages.

The first part covers definitions and the like; it would be better to come back to it after translating the whole chapter. For now, I will proceed with the translation, fixing the grammar and leaving my questions as comments.

The substantial part begins tomorrow. That is where the real learning is.

In fact, translating Chapters 14 and 15 has taught me a few things, which I would like to summarize in a post.

Monday, 1 July 2019

About this blog


I created this blog to record what I have learnt about medical education in various ways, for example through translating medical textbooks, searching the literature, and reading answers on Quora.

I will try to keep the concepts discussed here general, or make them generally applicable, so that this blog can be useful to as many teachers as possible (which means every single one of us!).

I hope you will find something useful here, or at least enjoy yourself! ☺️


What is this blog about?

This blog serves to record what I have learnt about Medical Education through various means, such as my part-time job as a medical book translator (English -> Japanese).

I would like to try and keep the concepts discussed in this blog general, or make them general, as I hope many teachers (which means every single one of us!) can benefit from this.

I hope you'll find something useful here - or at least enjoy yourself! :)