The Overview

This overview is the 'how to' of using the guide, tools and information for your evaluation of mobile learning.

=Purpose of The Guide=

The guide is designed to give you an explanation and road map of how to use it for an evaluation of mobile learning. It offers scaffolding and structure to plan and carry out your evaluation. There are tips and hints for each of the steps within the guide that will assist you in formulating an evaluation plan with focus and clarity.

This overview provides a definition of evaluation. Mobile learning is then explained in an educational context, along with an outline of three key articles on mobile learning frameworks. Guidance and information are provided to explain each step of the evaluation guide.

=Wiki Outline=

This is an outline of the tools and information provided in this wiki to support a mobile learning evaluation: mobilelearningevaluation.wikispaces.com

 * the template is a structural model of the three theoretical methodologies that underpin the guide. The template may also be used in your planning process for determining key aspects of your evaluation.
 * the tools provided are a combination of instruments that have been used in mobile learning evaluations: checklists, templates, questionnaires, a portable usability lab, a user observation protocol, a report outline and Gantt charts.
 * the videos address two questions: what is mobile learning? and what is evaluation?
 * the articles and the references provide you with the theoretical foundations and influences.
 * the bios and the individual reflections provide you with information about us and our thinking.

= What is evaluation? =

Evaluation, put simply, is the process by which people make value judgements about things. In the context of learning technology, these judgements usually concern the educational value of innovations, or the pragmatics of introducing novel teaching techniques and resources. Less frequent, but still important, are judgements about the costs of such innovations. (Judgements about ‘worth’, as opposed to ‘value’, in the terminology of Guba & Lincoln, 1981.) (Oliver 2000)

=What is mobile learning?=

Vavoula and Sharples (2009) describe mobile learning as often being 'defined in terms of the technology that mediates the learning experience: if the technology is mobile, so is the learning'. They explain that mobility is not 'an exclusive property of the technology, it also resides in the lifestyle of the learner, who in the course of everyday life moves from one context to another, switching locations, social groups, technologies and topics; and learning often takes place inconspicuously or is crammed in the short gaps between these transitions'. Vavoula and Sharples' view and definition of mobile learning is for formal education contexts; however, they also consider that 'it is particularly pertinent to everyday, informal learning'.

Mobile learning can have different interpretations in different settings. It has often been related to e-learning and distance education and implies learning with mobile devices. In more recent years mobile learning has expanded to include learners who are not in a fixed classroom setting or predetermined location. Learners today are taking advantage of the learning opportunities offered by mobile technologies. Mobile learning is removing the limits on learning locations, which can now be anywhere mobile devices can be used.

Learning settings have unlimited possibilities because convenience and accessibility are available anywhere there is online access. The focus is shifting to the mobility of the learners, who interact with portable technologies, and to the teachers, who can harness these engaging new environments.

=Evaluating Mobile Learning =

For this guide we have chosen the following theoretical methodologies as underpinning knowledge and frameworks for evaluating mobile learning (see the articles for the full versions):
 * Vavoula, G., & Sharples, M. (2009). Mobile learning.
 * Belshaw, D. (2011). Mobile learning kit info.
 * Sharples, M. (2009). Methods for evaluating mobile learning.

Vavoula and Sharples (2009) highlight the challenges of evaluating mobile learning:
 * 1) capturing and analysing learning in context and across contexts,
 * 2) measuring the processes and outcomes of mobile learning,
 * 3) respecting learner/participant privacy,
 * 4) assessing mobile technology utility and usability,
 * 5) considering the wider organisational and socio-cultural context of learning,
 * 6) assessing in/formality.

They also introduce an evaluation framework with three levels:
 * 1) micro level concerned with usability,
 * 2) meso level concerned with the learning experience,
 * 3) macro level concerned with integration within existing educational and organisational contexts.

Belshaw (2011) suggests that 'for mobile learning the complexities surrounding evaluating the success of an initiative are often heightened because of the added difficulty of evaluating across various contexts'. Belshaw includes Vavoula and Sharples' (2008) argument that "in order to establish, document and evaluate learning within and across contexts" it is necessary to analyse:
 * physical setting and layout of the learning space (the 'where')
 * social setting (who, with whom, from whom)
 * learning objectives and outcomes (why and what)
 * learning methods and activities (how)
 * learning progress and history (when)
 * learning tools (how)

In his article 'Methods for evaluating mobile learning', Sharples (2009) addresses the issues in evaluating mobile learning and identifies its distinctive aspects. Sharples has adapted a framework from Livingstone (2001) to show the distinction between 'whether the learning is initiated by the learner, or externally (e.g. teacher or a curriculum) and whether the learning process is managed by the learner or others'. This is demonstrated in the table below.

Evaluation frameworks are offered in the template as an integrated model that serves as an instrument for evaluating mobile learning. These methods are from:
 * a template based on Taylor Powell (1996)
 * Six steps to effective evaluation by JISC (2010)
 * a template for evaluation based on Reeves (2009)

You will also notice that the steps of the evaluation and the colour icons follow the template based on Taylor Powell (1996).

=Steps to the evaluation guide=
 * To make it easier to identify each step of the evaluation guide, coloured icons have been used (see below).
 * These steps offer a description, explanation and context to assist you in utilising the guide in the development of your mobile learning evaluation.
 * Follow the order in which the steps are presented within the guide and make note of the headings and information.
 * Additional pages, the templates and the tools, provide further resources.


|| 1. || [[image:mobilelearningevaluationguide/1Introbackgd.png width="383" height="90"]] ||
|| 2. || [[image:mobilelearningevaluationguide/2Purpose_audience.png width="381" height="86"]] ||
|| 3. || [[image:mobilelearningevaluationguide/3Focus_eval.png width="382" height="87"]] ||
|| 4. || [[image:mobilelearningevaluationguide/4Collecting_info.png width="382" height="88"]] ||
|| 5. || [[image:mobilelearningevaluationguide/5using_info.png width="382" height="87"]] ||
|| 6. || [[image:mobilelearningevaluationguide/6manageing_eval.png width="380" height="88"]] ||

=1. Introduction and background=

This is the section in which to think about why you want to evaluate mobile learning. Sharples, Taylor & Vavoula (2007) stated that 'mobile learning is not simply a variant of e-learning enacted with portable devices, nor an extension of classroom learning into less formal settings'. They added that 'recent research has focused on how mobile learning creates new contexts for learning through interactions between people, technologies and settings, and on learning within an increasingly mobile society'.

Chesterton and Cummings (2007) explain that to define the nature of a project 'it is essential that there be a clear and comprehensive mapping of the project itself'. Within this process it is recommended that the following key elements are addressed:
 * focus of the project
 * scope of the project
 * intended outcomes
 * operational processes
 * conceptual and theoretic framework
 * context of the project
 * key values

The Evaluation Cookbook (Harvey, 1998) poses a fundamental question: 'will it be worth it?'. If the results lead to action that improves the teaching and learning within the course or the institution, then the effort will be worthwhile.

=2. Purpose and audience=

The purpose is the primary reason for doing the mobile learning evaluation. Chesterton and Cummings (2007) recommended that once there is a primary purpose this then allows for a project 'which can be designed and planned, even though it may have several other purposes'. They also suggest caution at this stage as 'it is a common problem in evaluation studies that they are expected to be all things to all people, whereas the reality is they have limited resources (time, funds, expertise) and thus can only focus on a limited range of purposes'.

Your audience comprises the stakeholders of your mobile learning evaluation and those to whom information about the evaluation will be disseminated. Use the guide to assist you to focus on the stakeholders in the categories of interest, investment and involvement. Chesterton and Cummings (2007) offer a definition of stakeholders that is useful: 'Stakeholders are individuals/groups/organisations that have something significant to gain or lose in relation to the project and therefore the evaluation. As such, their interests must be considered in evaluating the program.'

=3. Focusing on the evaluation=

This step develops the focus of the mobile learning evaluation. In the guide there are many questions and instruments to assist you in formulating the focus of your evaluation.

To assist in clarifying your questions, the Innovation Network's Technology evaluation planning step-by-step (2007) provides three categories of questions:
 * 1) What did we do?
 * 2) How well did we do it?
 * 3) What difference did our program make? (What changes occurred because of our program?)

McNamara (2009) discusses three types of evaluation: [[http://managementhelp.org/evaluation/program-evaluation-guide.htm#anchor1316141]]
 * 1) goals based
 * 2) process based
 * 3) outcomes based

McNamara also recommends that 'you should not design your evaluation approach simply by choosing which of the three types you will use'; rather, consider all the aspects and considerations. Sharples (2009) also provided this view: 'evaluation for policy makers needs to provide evidence of learning gains or changes, either through comparison with existing approaches, or by showing how mobile learning can create radically new opportunities, such as linking people in real and virtual worlds. A useful way to approach the evaluation, for any stakeholder, is to address usability (will it work?), effectiveness (is it enhancing learning?) and satisfaction (is it liked?).' A possible evaluation focus could be:
 * What is the effect of mobile learning on the knowledge, skills and attitudes of learners?
 * How do learners perceive the effectiveness of mobile learning?

=4. Collecting the information=

As you will see from the guide, there are many instruments and methods for collecting data. It is at this stage of your planning that you need data collection specific to mobile learning. The Innovation Network (2007) says that this stage can 'be the most daunting'. They wisely offer this piece of advice: 'the goal in data collection is to minimise the number of collection instruments you use and maximise the amount of information you collect from each one!'

With this in mind think about these operational questions of data collection for your mobile learning evaluation:
 * Which methods will be least disruptive to your program and to those you serve?
 * Which methods can you afford and implement well?
 * Which methods are best suited to obtain information from your sources (considering cultural appropriateness and other contextual issues)?

Frechtling's (2002) additional considerations are also worthwhile integrating into your data collection thinking:
 * obtaining necessary clearance and permission
 * considering the needs and sensitivities of the respondents
 * ensuring that data collectors are adequately trained, objective and unbiased

According to Taylor (2006), research strategies in the area of mobile learning need to be more adaptive, and include alternative approaches such as analysis of interaction logs and learner contributions to externalized constructions.

Papadimetri (2007) also discusses mixed methods: recorded video, audio transcripts, observation notes, artefacts produced by the learners, application screen-shots and attitude surveys.

=5. Using the information=

This is the time to think about the data you have gathered and how best to use this information. There are many aspects to this step; the guide provides much information and many processes that will assist you to:
 * analyse the data, both quantitative and qualitative,
 * interpret the data,
 * interpret your own evaluation analysis,
 * identify who will interpret the data, and
 * communicate your data.

McNamara (2009) recommends **interpreting information** in these three steps:
 * 1) Attempt to put the information in perspective, e.g., compare results to what you expected, promised results; management or program staff; any common standards for your services; original program goals (especially if you're conducting a program evaluation); indications of accomplishing outcomes (especially if you're conducting an outcomes evaluation); description of the program's experiences, strengths, weaknesses, etc. (especially if you're conducting a process evaluation).
 * 2) Consider recommendations to help program staff improve the program, conclusions about program operations or meeting goals, etc.
 * 3) Record conclusions and recommendations in a report document, and associate interpretations to justify your conclusions or recommendations.

=6. Managing the evaluation=

Managing the evaluation is about developing the evaluation project plan. This plan becomes the road map of how things are going to happen, who is going to do them and the timeline for getting the project activities started and completed. This road map incorporates all of your planning and decisions from steps 1 to 5.

A project charter encompasses some or all of the following components for an evaluation plan:
 * **project profile:** This includes the time frame of the project, what the evaluation project is responsible for doing and the rationale of the project. It names any of the pertinent roles for this project, i.e. the project sponsor, project advisor and project leader. Within this section, linkages to any groups, national strategies or alignments are named.
 * **project goal:** This could also be the project purpose and objectives.
 * **scope of the project**: Naming the scope of the evaluation project is essential; more important still, naming what the project 'is not' defines the scope of the evaluation project.
 * **terms of reference**: Provides the framework and scaffolding for the operations of the evaluation project and often includes the following categories:
 * stakeholders
 * operational workforce
 * competencies required
 * functions of the workforce
 * conflicts of interest registered
 * project contacts
 * **deliverables**: These are agreed key results and outcomes based on the decisions and processes from steps 1 to 5.
 * **risk management**: This ensures that risks are identified, assessed and management strategies planned to mitigate risks impacting negatively on the evaluation.
 * **project plan**: The plan outlines the deliverables and underpins this with any stages and phases and lists the actions and activities required inclusive of who is responsible, when and the reporting requirements. Inclusive here also are the protocols and procedures for any changes that become necessary within the evaluation.
 * **project budget**: This will encompass not only the financials, also the authorisation mechanisms and people, reporting requirements, funders and sponsorships.
 * **communication plan**: Will include an introduction, background, issues, objectives, key messages, strategies for communication and the identified key stakeholders to receive communications.
 * **appendices**: This will include any other relevant information and documents to the evaluation plan and project.

McNamara (2008) covers the myths of program evaluation, where he articulates a common belief that evaluation is about 'proving the success or failure of a program' (p.1) and counterbalances this with the view that the success of evaluation lies in 'remaining open to continuing feedback and adjusting the program accordingly'.

Regarding mobile learning evaluation, Sharples (2009) said that the 'evaluation of mobile learning poses particular challenges not only because it introduces novel technologies and modes of learning, but also because it can spread across many contexts and over long periods of time'.

In conclusion, evaluating mobile learning is not so different from other evaluation projects: the settings can be anywhere and the technologies can be diverse, yet the instruments are common to most evaluations.
