  • Grading and Student Evaluation

    Base grades on student achievement, and achievement only. Grades should represent the extent to which the intended learning outcomes were achieved by students. They should not be contaminated by student effort, tardiness, misbehavior, and other extraneous factors. . . . If they are permitted to become part of the grade, the meaning of the grade as an indicator of achievement is lost.
    Gronlund (1998)

    Guidelines for grading and evaluation
    1. Develop an informed, comprehensive personal philosophy of grading that is consistent with your philosophy of teaching and evaluation.
    2. Ascertain an institution's philosophy of grading, and, unless otherwise negotiated, conform to that philosophy (so that you are not out of step with others).
    3. Design tests that conform to appropriate institutional and cultural expectations of the difficulty that students should experience.
    4. Select appropriate criteria for grading and their relative weighting in calculating grades.
    5. Communicate criteria for grading to students at the beginning of the course and at subsequent grading periods (mid-term, final).
    6. Triangulate letter grade evaluations with alternatives that are more formative and that give more washback.
    When you assign a letter grade to a student, that letter should be symbolic of your approach to teaching. If you believe that a grade should recognize only objectively scored performance on a final exam, it may indicate that your approach to teaching rewards end products only, not process. If you base some portion of a final grade on improvement, behavior, effort, motivation, and/or punctuality, it may say that your philosophy of teaching values those affective elements.
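    Guideline 4, selecting grading criteria and their relative weighting, comes down to simple weighted-average arithmetic. As a minimal sketch in Python (the component names, scores, weights, and letter-grade cutoffs below are invented for illustration, not taken from Brown or any institution's policy):

```python
# Hypothetical grading scheme: all names, scores, weights, and cutoffs
# are illustrative only.
components = {
    "final_exam": (88, 0.40),  # (score out of 100, relative weight)
    "essays":     (92, 0.30),
    "quizzes":    (75, 0.20),
    "projects":   (85, 0.10),
}

def weighted_grade(components):
    """Combine (score, weight) pairs into a single course score."""
    total_weight = sum(w for _, w in components.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(score * weight for score, weight in components.values())

def to_letter(score):
    """Map a numeric score to a letter grade on an illustrative scale."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

final = round(weighted_grade(components), 1)
print(final, to_letter(final))  # 86.3 B
```

    However the weights are chosen, guideline 5 still applies: the criteria and their weighting should be communicated to students at the beginning of the course.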

    You might be one of those teachers who feel that grades are a necessary nuisance and that substantive evaluation takes place through the daily work of optimizing washback in your classroom.

    Download the full material on Grading and Student Evaluation for presentation:
    Grading and Student Evaluation

    Further reading:
    Brown, H. D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
  • Assessing Writing

    Before looking at specific tasks, we must scrutinize the different genres of written language (so that context and purpose are clear), types of writing (so that stages of the development of writing ability are accounted for), and micro- and macroskills of writing (so that objectives can be pinpointed precisely).

    Genres of Written Language
    1. Academic writing
    Papers and general subject reports
    Essays, compositions
    Academically focused journals
    Short-answer test responses
    Technical reports (e.g., lab reports)
    Theses, dissertations
    2. Job-related writing
    Messages (e.g., phone messages)
    Letters/emails
    Memos (e.g., interoffice)
    Reports (e.g., job evaluations, project reports)
    Schedules, labels, signs
    Advertisements, announcements
    Manuals
    3. Personal Writing
    Letters, emails, greeting cards, invitations
    Messages, notes
    Calendar entries, shopping lists, reminders
    Financial documents (e.g., checks, tax forms, loan applications)
    Forms, questionnaires, medical reports, immigration documents
    Diaries, personal journal
    Fiction (e.g., short stories, poetry)
    Types of writing performance
    1. Imitative
    This category includes the ability to spell correctly and to perceive phoneme-grapheme correspondences in the English spelling system. It is a level at which learners are trying to master the mechanics of writing. At this stage, form is the primary if not exclusive focus, while context and meaning are of secondary concern.
    2. Intensive (controlled)
    Skills in producing appropriate vocabulary within a context, collocations and idioms, and correct grammatical features up to the length of sentence.
    3. Responsive
    Requires learners to perform at a limited discourse level, connecting sentences into a paragraph and creating a logically connected sequence of two or three paragraphs.
    4. Extensive
    Implies successful management of all the processes and strategies of writing for all purposes, up to the length of an essay, a term paper, a major research project report, or even a thesis.

    Download the full material on Assessing Writing for presentation:
    Assessing Writing

    Further reading:
    Brown, H. D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
  • Assessing Reading

    Being able to read is being able to understand, use and reflect on written texts, in order to achieve one's goals, to develop one's knowledge and potential, and to participate in society.
    John H.A.L. de Jong

    Types (Genres) of Reading

    1. Academic reading
    • General interest articles (in magazines, newspapers, etc.)
    • Technical reports (e.g., lab reports), professional journal articles
    • Reference material (dictionaries, etc.)
    • Textbooks, theses
    • Essays, papers
    • Test directions
    • Editorials and opinion writing
    2. Job-related reading
    • Messages (e.g., phone messages)
    • Letters/emails
    • Memos (e.g., interoffice)
    • Reports (e.g., job evaluations, project reports)
    • Schedules, labels, signs, announcements
    • Forms, applications, questionnaires
    • Financial documents (bills, invoices, etc.)
    • Directories (telephone, office, etc.)
    • Manuals, directions
    3. Personal reading
    • Newspapers and magazines
    • Letters, emails, greeting cards, invitations
    • Messages, notes, lists
    • Schedules (train, bus, plane, etc.)
    • Recipes, menus, maps, calendars
    • Advertisements (commercials, want ads)
    • Novels, short stories, jokes, drama, poetry
    • Financial documents (e.g., checks, tax forms, loan applications)
    • Forms, questionnaires, medical reports, immigration documents
    • Comic strips, cartoons
    The genre of a text enables readers to apply certain schemata that will assist them in extracting appropriate meaning. For example, if readers know that a text is a recipe, they will expect a certain arrangement of information (ingredients) and will know to search for a sequential order of directions. Efficient readers also have to know what their purpose is in reading a text, the strategies for accomplishing that purpose, and how to retain the information.

    The content validity of an assessment procedure is largely established through the genre of a text. For example, if learners in a program of English for tourism have been learning how to deal with customers needing to arrange bus tours, then assessment of their ability should include guidebooks, maps, transportation schedules, calendars, and other relevant texts.

    As usual, you can download the material on Assessing Reading in PowerPoint format.
    Download:
    Assessing Reading
    Assessing Reading with charts and pictures

    Further reading:
    Brown, H. D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
  • Assessing Speaking

    Basic Types of Speaking

    1. Imitative
    The ability to simply parrot back (imitate) a word or phrase or possibly a sentence. The criteria included in this performance are the phonetic level of oral production and a number of prosodic, lexical, and grammatical properties of language.

    2. Intensive
    The production of short stretches of oral language designed to demonstrate competence in a narrow band of grammatical, phrasal, lexical or phonological relationships (such as prosodic elements - intonation, stress, rhythm, juncture). Examples of intensive assessment tasks include directed response tasks, reading aloud, sentence and dialogue completion; limited picture-cued tasks including simple sequences; and translation up to the simple sentence level.

    3. Responsive
    These assessment tasks include interaction and test comprehension, but at the somewhat limited level of very short conversations, standard greetings and small talk, simple requests and comments, and the like.
    The stimulus is almost always a spoken prompt (in order to preserve authenticity), with perhaps only one or two follow-up questions or retorts. For example:

    You : Hey, Go, how's it going?
    Me : Not bad, and yourself?
    You : I'm good.
    Me : Cool. Okay, gotta go.
    4. Interactive
    The difference between responsive and interactive speaking is in the length and complexity of the interaction, which sometimes includes multiple exchanges and/or multiple participants.

    5. Extensive (monologue)
    These tasks include speeches, oral presentations, and storytelling, during which the opportunity for oral interaction from listeners is either highly limited (perhaps to nonverbal responses) or ruled out altogether.

    Micro and Macroskills of Speaking
    The microskills refer to producing the smaller chunks of language such as phonemes, morphemes, words, collocations, and phrasal units.

    The macroskills imply the speaker’s focus on the larger elements: fluency, discourse, function, style, cohesion, nonverbal communication, and strategic options.

    Okay, that concludes this introduction to Assessing Speaking. Having covered the basics, the next material concerns designing tests for each of the types described above. You can download the full material in PowerPoint format; it covers several topics on Assessing Speaking, such as Basic Types of Speaking, Micro- and Macroskills, and Designing Assessment Tasks for Each Type of Speaking.

    Download:
    Assessing Speaking with Pictures
    Assessing Speaking without Pictures

    Further reading:
    Brown, H. D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
  • Assessing Listening

    In earlier articles, a number of foundational principles of language assessment were introduced. Concepts like practicality, reliability, validity, authenticity, washback, direct and indirect testing, and formative and summative assessment have been well explained here.

    Now, we will shift away from those principles to the classroom assessment of listening.

    Observing the Performance of the Four Skills
    Two interacting concepts:
    1. Performance
    2. Observing
    When you propose to assess someone's ability in one or a combination of the four skills (listening, speaking, reading, and writing), you assess that person's competence, but you observe the person's performance. Sometimes the performance does not indicate true competence: illness, an emotional distraction, test anxiety, or other student-related reliability factors could affect performance, thereby providing an unreliable measure of actual competence.

    So, one important principle for assessing a learner's competence is to consider the fallibility of the results of a single performance, such as that produced in a test. As with any attempt at measurement, it is your obligation as a teacher to triangulate the measurements: consider at least two (or more) performances or contexts before drawing a conclusion. That could take the form of one or more of the following designs:
    • Several tests that are combined to form an assessment
    • A single test with multiple test tasks to account for learning styles and performance variables
    • In-class and extra-class graded work
    • Alternative forms of assessment (e.g., journals, portfolios)
    Multiple measures will always give you a more reliable and valid assessment than a single measure.
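    The triangulation designs above can be sketched as simple arithmetic: a decision based on several combined measures is less vulnerable to one unreliable performance than a decision based on a single test. (The measure names, scores, and pass mark below are invented for illustration.)

```python
# Illustrative sketch of triangulation. All measure names, scores, and
# the pass mark are hypothetical; the point is the comparison at the end.
measures = {
    "unit_test":        72,  # one fallible performance (e.g., test anxiety)
    "in_class_work":    88,
    "extra_class_work": 85,
    "portfolio":        90,
}

single_measure = measures["unit_test"]              # a single test score
triangulated = sum(measures.values()) / len(measures)  # composite of 4 measures

PASS_MARK = 80
print(single_measure >= PASS_MARK)  # False: one bad day sinks the result
print(triangulated >= PASS_MARK)    # True: the composite tells another story
```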
    The second principle is that we must rely as much as possible on observable performance in our assessments of students.

    Basic Types of Listening
    Designing appropriate assessment tasks in listening begins with the specification of objectives, or criteria. The objectives may be classified in terms of several types of listening performance.
    Four stages of processing flash through the brain:
    1. Comprehending of surface structure elements such as phonemes, words, intonation, or a grammatical category
    2. Understanding of pragmatic context
    3. Determining meaning of auditory input
    4. Developing the gist, a global or comprehensive understanding
    From these stages, we can derive four commonly identified types of listening performance:
    1. Intensive
    • Listening for perception of the components (phonemes, words, intonation, etc.) of a larger stretch of language.
    2. Responsive
    • Listening to a relatively short stretch of language (a greeting, question, command, etc.) in order to make an equally short response.
    3. Selective
    • Processing stretches of discourse such as short monologues for several minutes in order to scan for certain information. The purpose of such performance is to comprehend designated information in a context of longer stretches of spoken language (such as classroom directions from a teacher, stories, news, etc.). Assessment tasks in selective listening could ask students, for example, to listen for names, numbers, directions, or certain facts and events.
    4. Extensive
    • Extensive performance ranges from listening to lengthy lectures to listening to a conversation and deriving a comprehensive message or purpose. Listening for the gist, for the main idea, and making inferences are all part of extensive listening.

    Micro- and Macroskills of Listening
    Microskills attend to the smaller bits and chunks of language, while macroskills focus on the larger elements. A list of 17 different objectives to assess in listening is provided.

    Eight characteristics of spoken language make listening difficult:
    1. Clustering : attending to appropriate chunks of language - phrases, clauses, constituents.
    2. Redundancy : recognizing the kinds of repetitions, rephrasing, elaborations, and insertions that unrehearsed spoken language often contains, and benefiting from that recognition.
    3. Reduced forms : understanding the reduced forms that may not have been a part of an English learner's past learning experiences in classes where only formal textbook language has been presented.
    4. Performance variables : being able to weed out hesitations, false starts, pauses, and corrections in natural speech.
    5. Colloquial language : comprehending idioms, slang, reduced forms, shared cultural knowledge.
    6. Rate of delivery : keeping up with the speed of delivery, processing automatically as the speaker continues.
    7. Stress, rhythm, and intonation : correctly understanding prosodic elements of spoken language, which can be much more difficult than understanding the smaller phonological bits and pieces.
    8. Interaction : managing the interactive flow of language from listening to speaking to listening, etc.
    Designing Assessment Tasks
    - Intensive Listening
    • Recognizing Phonological and Morphological Elements
    • Paraphrase Recognition
    - Responsive Listening
    - Selective Listening
    • Listening Cloze
    • Information Transfer
    • Sentence Repetition
    - Extensive Listening
    • Dictation
    • Communicative Stimulus-Response Tasks
    • Authentic Listening Tasks
    You can download this material for your presentation here.
    Further reading:
    Brown, H. D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
  • Standards-Based Assessment

    A standardized test presupposes certain objectives, or criteria, that are held constant from one form of the test to another. The criteria in large-scale standardized tests are designed to apply to a broad band of competencies that are usually not exclusive to one particular curriculum. A good standardized test is the product of a thorough process of empirical research and development.

    Advantages of standardized testing include, foremost, a ready-made, previously validated product that frees the teacher from having to spend hours creating a test.

    Disadvantages center on the inappropriate use of tests, for example, using an overall proficiency test as an achievement test simply because of the convenience of standardization. Another disadvantage is the potential misunderstanding of the difference between direct and indirect testing.

    In order to develop a standardized test, we can revise an existing test, adapt or expand an existing test, or even create a smaller-scale standardized test for the program we are teaching in.
    Moreover, to evaluate and develop a classroom test we have to:

    Determine the purpose and objectives of the test
    Let’s look at three tests:
    1. The purpose of the TOEFL is to evaluate the English proficiency of people whose native language is not English. It is designed to help institutions make valid decisions concerning English proficiency requirements.
    2. The ESLPT is designed by university faculty and staff to place students into appropriate courses.
    3. The GET is a test to determine whether prospective students, both native and nonnative speakers, have the writing ability to enter graduate-level courses in a program.
    Design test specifications
    Decisions need to be made on how to go about designing the specifications of the test. Before specs can be addressed, a comprehensive program of research must identify a set of constructs underlying the test itself.

    Design, select, and arrange test tasks/items
    Once specs for a standardized test have been stipulated, the sometimes never-ending task of designing, selecting, and arranging items begins. The specs act much like a blueprint in determining the number and types of items to be created.

    Specify scoring and reporting formats
    A systematic assembly of test items in preselected arrangements and sequences, all of which are validated to conform to an expected difficulty level, should yield a test that can then be scored accurately and reported back to test-takers and institutions efficiently.

    Perform ongoing construct validation studies
    No standardized instrument is expected to be used repeatedly without a rigorous program of ongoing construct validation. Any standardized test, once developed, must be accompanied by systematic periodic corroboration of its effectiveness and by steps toward its improvement.

    If you are looking for slide shows of this topic, you can download it here (google drive)

    Further reading:
    Brown, H. D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
  • Designing Classroom Language Tests

    In order to design a test, five factors need to be taken into consideration:
    1. purpose,
    2. objectives,
    3. selecting and arranging items,
    4. types of scoring and grading, and
    5. feedback.

    Test Types
    Defining the purpose for the test will help the teacher choose the right kind of test, and it will also help the teacher to focus on the specific objectives of the test.
    There are five types of test :

    Language Aptitude Tests
    • This test is designed to measure capacity or general ability to learn a foreign language and ultimate success in that undertaking. Language aptitude tests are ostensibly designed to apply to the classroom learning of any language.
    Proficiency Tests
    • This test is not limited to any one course, curriculum, or single skill in the language; rather, it tests overall ability. Proficiency tests have traditionally consisted of standardized multiple-choice items on grammar, vocabulary, reading comprehension and aural comprehension.
    Placement Tests
    • A placement test usually, but not always, includes a sampling of the material to be covered in the various courses in a curriculum; a student's performance on the test should indicate the point at which the student will find material neither too easy nor too difficult but appropriately challenging.
    Diagnostic Tests
    • A diagnostic test is designed to diagnose specified aspects of a language. A test in pronunciation, for example, might diagnose the phonological features of English that are difficult for learners and should therefore become part of a curriculum.
    • Usually, such tests offer a checklist of features for the administrator (teacher) to use in pinpointing difficulties.
    Achievement Tests
    • This test is related directly to classroom lessons, units, or even a total curriculum. Achievement tests are limited to particular material addressed in a curriculum within a particular time frame and are offered after a course has focused on the objectives in question. They can also serve the diagnostic role of indicating what a student needs to continue to work on in the future, but the primary role of an achievement test is to determine whether course objectives have been met and appropriate knowledge and skills acquired by the end of a period of instruction.
    Simply put, there are some practical steps to test construction:
    • Assess clear, unambiguous objectives by knowing the purpose of the test you are creating.
      - Set clear and specific objectives.
    • Determine a simple and practical outline and the skills to be tested, and decide on forms of item types and tasks.
    • Devise test tasks by drafting the questions, revising the draft, requesting help from a colleague, and imagining yourself as a student taking the test.
    • Design multiple-choice items, checking practicality, reliability, and the possibility of cheating.

    Scoring, Grading, and Giving Feedback
    • Teachers assign letter grades depending on, for example, the culture, context, and relations of the English classroom and teacher expectations.
    • Give even weighting of points to each section.
    • After testing, give students feedback that identifies their successes and challenges.

    If you are looking for slide shows of this topic, you can download it here (google drive)

    Further reading :
    Brown, D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
  • Beyond Tests: Alternatives in Assessment

    In this chapter, an important distinction was made between testing and assessing.

    Tests are formal procedures, usually administered within strict time limitations, to sample the performance of a test-taker in a specified domain.

    Assessment connotes a much broader concept in that most of the time when teachers are teaching, they are also assessing. Assessment includes all occasions from informal impromptu observations and comments up to and including tests.

    Early in the decade of the 1990s, in a culture of rebellion against the notion that all people and all skills could be measured by traditional tests, a novel concept emerged that began to be labeled "alternative" assessment.

    That concept was to assemble additional measures of students—portfolios, journals, observations, self-assessments, peer-assessments, and the like—in an effort to triangulate data about students.

    Brown and Hudson (1998) noted that to speak of alternative assessments is counterproductive because the term implies something new and different that may be "exempt from the requirements of responsible test construction" (p. 657). So they proposed to refer to "alternatives" in assessment instead. Their term is a perfect fit within a model that considers tests as a subset of assessment.

    The characteristics of alternatives in assessment
    • They require students to perform, create, produce, or do something
    • They use real-world contexts or simulations
    • They are nonintrusive in that they extend day-to-day classroom activities
    • They allow students to be assessed on what they normally do in class
    • They use tasks that represent meaningful instructional activities
    • They focus on processes as well as products
    • They tap into higher-level thinking and problem-solving skills
    Dilemma in standardized and alternatives in assessment
    Formal standardized tests are almost by definition highly practical, reliable instruments. They are designed to minimize time and money on the part of test designer and test-taker, and to be painstakingly accurate in their scoring.

    Alternatives such as portfolios or conferencing with students on drafts of written work, or observations of learners over time all require considerable time and effort on the part of the teacher and the student.

    But the alternative techniques also offer markedly greater washback, are superior formative measures, and, because of their authenticity, usually carry greater face validity.

    Performance-based assessment
    Performance-based assessment implies productive, observable skills, such as speaking and writing, of content-valid tasks. Such performance usually, but not always, brings with it an air of authenticity—real-world tasks that students have had time to develop. It often implies an integration of language skills, perhaps all four skills in the case of project work.

    The characteristics of performance assessment:
    • Students make a constructed response
    • They engage in higher-order thinking, with open-ended tasks
    • Tasks are meaningful, engaging, and authentic
    • Tasks call for the integration of language skills
    • Both process and product are assessed
    • Depth of a student’s mastery is emphasized over breadth
    Procedures for performance-based assessment
    Performance-based assessment procedures need to be treated with the same rigor as traditional tests. This implies that teachers should:
    • State the overall goal of the performance,
    • Specify the objectives (criteria) of the performance in detail,
    • Prepare students for performance in stepwise progressions,
    • Use a reliable evaluation form, checklist, or rating sheet,
    • Treat performances as opportunities for giving feedback and provide that feedback systematically, and
    • If possible, utilize self- and peer-assessments judiciously.
    Portfolios
    A portfolio is a purposeful collection of students’ work that demonstrates their efforts, progress, and achievements in given areas (Genesee and Upshur, 1996).

    Portfolios include materials such as,
    • Essays and compositions in draft and final forms;
    • Reports, project outlines;
    • Audio and/or video recordings of presentations, demonstrations, etc.;
    • Journals, diaries, and other personal reflections;
    • Tests, test scores, and written homework exercises;
    • Self- and peer-assessments--comments, evaluations, and checklists.
    Attributes of portfolios
    Gottlieb (1995) suggested a developmental scheme for considering the nature and purpose of portfolios, using the acronym CRADLE to designate six possible attributes of a portfolio:

    Collecting: an expression of students' lives and identities.
    Reflecting: thinking about experiences and activities.
    Assessing: evaluating quality and development over time.
    Documenting: demonstrating student achievement.
    Linking: connecting student and teacher, parent, community, and peer.
    Evaluating: generating responsible outcomes.
    Steps and guidelines
    1. State objectives clearly.
    2. Give guidelines on what materials to include.
    3. Communicate assessment criteria to students.
    4. Designate time within the curriculum for portfolio development.
    5. Establish periodic schedules for review and conferencing.
    6. Designate an accessible place to keep portfolios.
    7. Provide positive washback when giving final assessments.
    It is inappropriate to reduce the personalized and creative process of compiling a portfolio to a number or letter grade. Instead, teachers should offer a qualitative evaluation, such as a final appraisal of the work, with questions for self-assessment of the project and a narrative evaluation of perceived strengths and weaknesses.

    Journals
    A journal is a log of one’s thoughts, feelings, reactions, assessments, ideas, or progress toward goals, usually written with little attention to structure, form, or correctness.

    Journals obviously serve important pedagogical purposes: practice in the mechanics of writing, using writing as a thinking process, individualization, and communication with the teacher.

    Steps for journals
    1. Sensitively introduce students to the concept of journal writing.
    2. State the objective(s) of the journal: Language-learning logs, Grammar journals, Responses to readings, strategies-based learning logs, Self-assessment reflections, etc.
    3. Give guidelines on what kinds of topics to include.
    4. Carefully specify the criteria for assessing or grading journals. Effort as exhibited in the thoroughness of students' entries will no doubt be important. Also, the extent to which entries reflect the processing of course content might be considered. 
    5. Provide optimal feedback in your responses: cheerleading feedback, instructional feedback, or reality-check feedback.
    6. Designate appropriate time frames and schedules for review.
    7. Provide formative, washback-giving final comments.
    Conferences and interviews
    Conferences are not limited to drafts of written work. The teacher must assume the role of a facilitator and guide, not of an administrator of a formal assessment.

    A number of generic questions that may be useful to pose in conferences are:
    1. What did you like about this work?
    2. What do you think you did well?
    3. How does it show improvement from previous work? Can you show me the improvement?
    4. What did you do when you did not know a word that you wanted to write/say? (Genesee and Upshur, 1996)
    Guidelines for conferences and interviews
    1. Offer an initial atmosphere of warmth and anxiety-lowering (warm-up).
    2. Begin with relatively simple questions.
    3. Continue with level-check and probe questions, but adapt to the interviewee as needed.
    4. Frame questions simply and directly.
    5. Focus on only one factor for each question. Do not combine several objectives in the same question.
    6. Be prepared to repeat or reframe questions that are not understood.
    7. Wind down with friendly and reassuring closing comments.
    Observations
    Observation is a systematic, planned procedure for real-time, almost furtive recording of student verbal and nonverbal behavior. One of the objectives of such observation is to assess students without their awareness (and possible consequent anxiety) of the observation so that the naturalness of their linguistic performance is maximized.

    Potential observation foci
    • sentence-level oral production skills.
    • pronunciation of target sounds, intonation, etc.
    • grammatical features (verb tenses, question formation, etc.)
    • discourse-level skills (conversation rules, turn-taking, and other macroskills)
    • interaction with classmates (cooperation, frequency of oral production)
    • frequency of student-initiated responses (whole class, group work)
    Steps for observations
    1. Determine the specific objectives of the observations
    2. Decide how many students will be observed at one time
    3. Set up the logistics for making unnoticed observations
    4. Design a system for recording observed performances
    5. Do not overestimate the number of different elements you can observe at one time
    6. Plan how many observations you will make
    7. Determine specifically how you will use the results
    Alternatives in observation
    Checklists are a viable alternative for recording observation results.

    The observer identifies an activity or episode and checks appropriate boxes along a grid. This grid refers to variables such as whole-class, group, and individual participation, linguistic competence (form, function, discourse, sociolinguistic), etc. Each variable has subcategories for better analysis.
    Rating scales have also been suggested for recording observations.

    One type of rating scale asks teachers to indicate the frequency of occurrence of target performance on a separate frequency scale (always = 5; never = 1).
    Another is a holistic assessment scale that requires an overall assessment within a number of categories (for example, vocabulary usage, grammatical correctness, fluency). 
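As a minimal illustration of the frequency-scale idea (the observation categories and ratings below are hypothetical, not taken from Brown), a teacher's frequency ratings for one student can be averaged into a single observation score:

```python
# Hypothetical frequency ratings on a 5-point scale (always = 5 ... never = 1)
# for one student across several observation foci.
ratings = {
    "turn-taking": 4,
    "pronunciation of target sounds": 3,
    "student-initiated responses": 5,
}

# A simple summary: the mean rating across all observed categories.
observation_score = sum(ratings.values()) / len(ratings)
print(round(observation_score, 2))  # 4.0
```

A holistic scale would instead assign one overall rating per category (vocabulary, grammar, fluency) rather than averaging frequencies.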
    Self- and peer-assessment
    Self-assessment derives its theoretical justification from a number of well-established principles of second language acquisition. The principle of autonomy is vital: it consists of the ability to set one's own goals both within and beyond the structure of a classroom curriculum, to pursue them without the presence of an external push, and to independently monitor that pursuit. Developing the intrinsic motivation that comes from a self-propelled desire to excel is at the top of the list of keys to the successful acquisition of any set of skills.

    Peer assessment appeals to similar principles, the most obvious of which is cooperative learning. Many people go through a whole regimen of education, from kindergarten up through a graduate degree, and never come to appreciate the value of collaboration in learning.

    Peer assessment is simply one arm of a plethora of tasks and procedures within the domain of learner-centered and collaborative education.

    Types of self and peer assessments
    • Assessment of a specific performance
    • Indirect assessment of general competence
    • Metacognitive assessment for setting goals
    • Socioaffective assessment
    • Student-generated tests
    Guidelines for self and peer assessments
    1. Tell students the purpose of the assessment.
    2. Define the task(s) clearly.
    3. Encourage impartial evaluation of performance or ability.
    4. Ensure beneficial washback through follow-up tasks.
    Self- and peer-assessment tasks
    Listening Tasks
    listening to TV or radio broadcasts and checking comprehension with a partner
    listening to an academic lecture and checking yourself on a "quiz" of the content

    Speaking Tasks
    using peer checklists and questionnaires
    rating someone's oral presentation (holistically)

    Reading Tasks
    reading passages with self-check comprehension questions following
    taking vocabulary quizzes

    Writing Tasks
    revising written work on your own or with a peer (peer editing)
    proofreading

    If you are looking for slides on this topic, you can download them here (Google Drive).

    Source:
    Brown, D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.
    Yamith J. Fandiño
    La Salle University
    Bogotá, Colombia
  • Principles of Language Assessment

    In testing a test, there are five cardinal criteria: practicality, reliability, validity, authenticity, and washback. We will look at each one, with no priority implied by the order of presentation.

    Practicality
    An effective test is practical. This means that it
    • is not excessively expensive,
    • stays within appropriate time constraints,
    • is relatively easy to administer, and
    • has a scoring/evaluation procedure that is specific and time-efficient.

    Reliability
    A reliable test is consistent and dependable. If you give the same test to the same student or matched students on two different occasions, it should yield similar results. The issue of test reliability may best be addressed by considering a number of factors that can contribute to the unreliability of a test.

    A. Student-Related Reliability
    • The most common learner-related issue in reliability is caused by temporary illness, fatigue, anxiety and other physical or psychological factors, which may make an "observed" score deviate from one's "true" score.
    B. Rater Reliability
    • Human error, subjectivity, and bias may enter into the scoring process. Inter-rater unreliability occurs when two or more scorers yield inconsistent scores on the same test, possibly because of lack of attention to scoring criteria, inexperience, inattention, or even preconceived biases.
    • Rater-reliability issues are not limited to contexts where two or more scorers are involved. Intra-rater unreliability is a common occurrence for classroom teachers because of unclear scoring criteria, fatigue, bias toward particular "good" and "bad" students, or simple carelessness.
    C. Test Administration Reliability
    • Unreliability may also result from the conditions in which the test is administered. Consider, for example, the administration of a test of aural comprehension in which a tape recorder played the items, but street noise outside the building meant that students sitting next to windows could not hear the tape accurately. That is a clear case of unreliability caused by the conditions of the test administration.
    D. Test Reliability
    • Sometimes the nature of the test itself can cause measurement errors. If a test is too long, test-takers may become fatigued by the time they reach the later items and hastily respond incorrectly. Timed tests may discriminate against students who do not perform well under a time limit. We all know people who "know" the course material perfectly but who are adversely affected by the presence of a clock ticking away. Poorly written test items may be a further source of test unreliability.
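Rater reliability can also be quantified. A minimal sketch (the raters and scores below are hypothetical, not from Brown) computes the Pearson correlation between two raters' scores on the same set of essays; values near 1.0 indicate strong inter-rater reliability:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Two raters scoring the same five essays on a 1-5 scale (hypothetical data).
rater_1 = [4, 3, 5, 2, 4]
rater_2 = [4, 3, 4, 2, 5]
print(round(pearson(rater_1, rater_2), 2))  # 0.81
```

A low coefficient would signal the scoring-criteria or bias problems described above, prompting rater training or clearer rubrics.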

      Validity
      By far the most complex criterion of an effective test and arguably the most important principle is validity, "the extent to which inferences made from assessment results are appropriate, meaningful and useful in terms of the purpose of the assessment" (Gronlund, 1998).

      And how is the validity of a test established? There is no final, absolute measure of validity, but several different kinds of evidence may be invoked in its support. Below, we will look at five types of evidence.

      1. Content-Related Evidence
      • If a test actually samples the subject matter about which conclusions are to be drawn, and if it requires the test-taker to perform the behavior that is being measured, it can claim content-related evidence of validity, often popularly referred to as content validity (e.g., Mousavi, 2002; Hughes, 2003).
      2. Criterion-Related Evidence
          Criterion-related evidence usually falls into one of two categories:

          a. Concurrent validity
      • A test has concurrent validity if its results are supported by other concurrent performance beyond the assessment itself. For example, the validity of a high score on the final exam of a foreign language course will be substantiated by actual proficiency in the language.
          b. Predictive validity
      • The predictive validity of an assessment becomes important in the case of placement tests, admissions assessment batteries, language aptitude tests and the like.
      3. Construct-related Evidence
      • This type of evidence does not play as large a role for classroom teachers. Constructs may or may not be directly or empirically measurable; their verification often requires inferential data.
      4. Consequential Validity
      • Consequential validity encompasses all the consequences of a test, including such considerations as its accuracy in measuring intended criteria, its impact on the preparation of test-takers, its effect on the learner, and the (intended and unintended) social consequences of a test's interpretation and use.
      5. Face Validity
      • "Face validity refers to the degree to which a test looks right and appears to measure the knowledge or abilities it claims to measure, based on the subjective judgment of the examinees who take it, the administrative personnel who decide on its use, and other psychometrically unsophisticated observers" (Mousavi, 2002).

      Authenticity
      Bachman and Palmer (1996) define authenticity as "the degree of correspondence of the characteristics of a given language test task to the features of a target language use task," and then suggest an agenda for identifying those target language use tasks and for transforming them into valid test items.

      Washback
      Washback generally refers to the effects a test has on instruction in terms of how students prepare for it. A little washback may also help students through a specification of the numerical scores on the various subsections of the test.

      If you are looking for slides on this topic, you can download them here (Google Drive).

      Further reading:
      Brown, D. (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Longman.