Pre-conference workshop program: Sunday 15 September 2019

>>> DOWNLOAD a printable conference workshop program

View the Thursday post-conference workshop program here.


8–9am REGISTRATION
9am–12:30pm WORKSHOP PROGRAM: First half of full day workshops
12:30–1:30pm LUNCH
1:30–5pm WORKSHOP PROGRAM: Second half of full day workshops; Afternoon half-day workshop

WORKSHOP DETAILS

Categories:
A. Foundational evaluation skills and capabilities
B. New tools, approaches and ways of thinking
C. Advanced evaluation topics


FULLY BOOKED! Rubrics-enhanced evaluation 

presented by Jane Davidson, Kate McKegg, Nan Wehipeihana    FULL DAY | CATEGORY: C

Evaluation, by definition, is fundamentally about asking and answering evaluative questions (i.e., questions about merit/quality, worth/value, and significance/importance). Not just 'What were the results?' but 'How good, valuable, and/or important were they?' Rubrics-Enhanced Evaluation is a powerful and flexible approach for

  1. clearly defining what constitutes 'good', 'valuable' or 'important' in a particular context, and why; and
  2. systematically and transparently interpreting qualitative and quantitative evidence relative to these well-grounded definitions.

This workshop is suitable for new and experienced evaluators alike. Topics include how to make your evaluation questions truly evaluative; what rubrics are and what they are for; different rubric designs and when to use them; rubric development processes for collaborative or independent evaluation; validation and field testing tips; and rubrics-enhanced evaluation reporting. Rubrics mesh well with a diverse range of evaluation approaches. They are particularly helpful when the most important outcomes are those least easily measured.

Jane Davidson is best known for pioneering the increasingly popular Evaluation Rubrics Methodology, along with her various other refreshingly practical evaluation frameworks and approaches. Originally from Aotearoa New Zealand, Jane is a former Associate Director of The Evaluation Center at Western Michigan University, where she launched and directed the world's first fully interdisciplinary Ph.D. in Evaluation. She was the 2005 recipient of the American Evaluation Association’s Marcia Guttentag Award, and serves as Honorary Principal Fellow at the University of Melbourne. Jane is currently based in Seattle, and is sought after internationally as a speaker, author, evaluation coach, workshop and webinar presenter, and creator of awesome evaluation frameworks and tools.

Kate McKegg is the director of The Knowledge Institute Ltd and a member of Kinnect Group, as well as Tuakana Teina, an indigenous-led collective based in the Waikato region of New Zealand. Kate has worked in evaluation, evaluation capacity building, research, policy and public sector management since the late 1980s. She has a deep commitment to social and environmental justice and strives daily to decolonize her thinking and actions to support indigenous colleagues in struggles for justice, sovereignty, healing and revitalization.

Nan Wehipeihana is the director of Research Evaluation Consultancy Ltd and a member of Kinnect Group. Nan's tribal affiliations are to Ngāti Tukorehe and Ngāti Raukawa, north of Wellington, and to Ngāti Porou and Te Whānau-ā-Apanui on the East Coast of New Zealand. Nan specializes in evaluation with a focus on Māori (Indigenous New Zealanders) and on building evaluation capacity with tribes and Māori organisations to evidence outcomes, including cultural outcomes. By bringing the voices and views of Māori to government and funders, she aims to offer insight into Māori values, perspectives and experiences for use in government, business and community contexts.

> register


Theories of evaluation

presented by Brad Astbury   FULL DAY | CATEGORY: A, C

This workshop provides an overview of the origins and evolution of evaluation theory. Attention to theory in evaluation has focused predominantly on program theory, and few evaluation practitioners have received formal training in evaluation theory. This workshop addresses that gap by introducing a framework for conceptualising different theories of evaluation and a set of criteria to support critical thinking about the practice-theory relationship in evaluation.

Participants will learn about:

  • the nature and role of evaluation theory
  • major theorists and their contributions
  • approaches to classifying evaluation theories
  • key ways in which evaluation theorists differ and what this means for practice
  • dangers involved in relying too heavily on any one particular theory, and
  • techniques for selecting and combining theories based on situational analysis.

Case examples will be used to illustrate why evaluation theory matters and how different theoretical perspectives can inform, shape and guide the design and conduct of evaluations in different practice settings.

This workshop aligns with competencies in the AES Evaluator’s Professional Learning Competency Framework. The identified domains are:

  • Domain 1 – Evaluative attitude and professional practice
  • Domain 2 – Evaluation theory
  • Domain 4 – Research methods and systematic inquiry

The workshop is designed for both new and experienced evaluators and commissioners of evaluation.

Brad Astbury is a Director of ARTD Consulting and works out of the Melbourne office. He has over 18 years’ experience in evaluation and applied social research and considerable expertise in combining diverse forms of evidence to improve both the quality and utility of evaluation. He has managed and conducted needs assessments, process and impact studies and theory-driven evaluations across a wide range of policy areas for industry, government, community and not-for-profit clients. Prior to joining ARTD in 2018, Brad worked for over a decade at the University of Melbourne, where he taught and mentored postgraduate evaluation students. 

> register


A gentle introduction to the collection and analysis of statistical data for evaluation

presented by Mark Griffin    FULL DAY | CATEGORY: A

A robust evaluation makes use of both qualitative and quantitative research methods. At the same time, many people commissioning or conducting evaluations have little training in or understanding of quantitative methods such as survey design and statistics; some may even feel anxious about them. This workshop is not intended to turn evaluation practitioners into hard-core data scientists. Instead, the goal is to give evaluation practitioners the tools they need to work productively and in close collaboration with data scientists, and to give evaluation commissioners the tools they need to scope projects involving statistical components, to assess the value of subsequent bids from potential statistical consultants, and to maximise the potential for statistical work to lead to true insights and business value within the commissioner’s organisation. Accordingly, this workshop will not focus on the specific technical intricacies of the mathematical techniques discussed. It will instead work through a series of case studies where statistical methods have been applied in a sophisticated manner, introducing along the way a range of statistical methods, the types of research questions that can be asked using each method, and some basic guidelines for proper statistical practice, such as the importance of checking the statistical properties of a dataset prior to conducting the analysis.

During this workshop the following topics will be presented:

  • Designing an evaluation project including a statistical component
  • Survey design and data collection
  • Statistical methods that could be employed
  • Writing up and presenting statistical results

Approximately two thirds of the workshop will be spent presenting PowerPoint slides; in the remaining third, participants will divide into small groups to discuss how the statistical methods presented have been or could be used in projects they are involved in.

Mark Griffin is the Founding Director of Insight Research Services Associated (www.insightrsa.com), and holds academic appointments at the University of Queensland and the University of Sydney. Mark serves on the Executive Committee for the Statistical Society of Australia, and is the Founder and Co-Chair of the Special Interest Group for Business Analytics within the International Institute of Business Analysis. Mark has been the primary statistician for a number of large surveys (including a survey of 140,000 parents receiving the Positive Parenting Program in Queensland), and Insight is a member of a number of government panels, including that for the Therapeutic Goods Administration within the Australian Department of Health. Since the formation of Insight, Mark has presented over 90 two-day and 15 five-day workshops in statistics around Australia, and has recently started an annual international speaking tour.

> register


Designing social impact evaluations

presented by Ruth Aston, Rachel Aston, Timoci O'Connor    FULL DAY | CATEGORY: B

The number of complex social programs aiming to achieve social impact continues to grow. However, evaluating their impact is not simple; the challenge is ensuring that evaluation designs produce evidence to satisfy funding requirements while maintaining the rigour needed to evaluate the impact of complex efforts (Gargani & Donaldson, 2011). We will introduce key measurement issues and existing approaches to evaluating social impact, delineating their strengths and limitations at various stages of the program life cycle (needs assessment, program design, implementation, program maintenance, etc.). We will also briefly address the role of frameworks and schools of thought often associated with social impact, including but not limited to Collective Impact, co-design, implementation science, systems thinking and social complexity. Participants will then apply this knowledge in a series of exercises, including developing social impact evaluation questions and using these questions to consider and design an evaluation methodology. A guest speaker from an organisation implementing social impact programs will facilitate a discussion of considerations for commissioning and funding social impact evaluations. Lecture, group discussions and exercises will be used in this one-day workshop. An evaluation case study will be provided; however, participants are encouraged to bring a case from their own practice.

At the end of the workshop participants will be able to: 

  1. Develop social impact evaluation questions 
  2. Categorise social impact evaluation questions based on the program life cycle
  3. Identify key methodological design considerations for social impact evaluation questions 
  4. Select and justify an approach to assessing impact in an evaluation case study
  5. Describe existing approaches to evaluating social impact and identify their strengths and weaknesses
  6. Describe and explain how formative and summative evaluation can be applied in a social impact evaluation
  7. Critically discuss practical considerations for the design and implementation of a social impact evaluation in a real-world setting

Ruth Aston is a Research Fellow and Lecturer at the Centre for Program Evaluation at The University of Melbourne. Ruth is currently working on several evaluations in education and public health, including the Evaluation of the Indigenous Australians' Health Program and the Evaluation of the Differentiated Support for School Improvement in Victoria. Ruth's PhD investigated social change in public health, focusing in particular on the development of criteria for articulating success in achieving outcomes in complex multi-level, multi-site community interventions aiming to achieve social change in health outcomes. This is where her research interests lie: she is testing measurement models to capture and test criteria for the successful implementation (and effectiveness) of complex interventions (policy, programs, initiatives) that aim to achieve enduring social change.

Rachel Aston is an experienced social researcher and evaluator at ARTD Consultants. She brings over eight years’ experience conducting research and evaluation for government, NGOs and the higher education sector. Rachel has a high level of expertise in qualitative and mixed-methods research and evaluation methodology, with an academic background in anthropology and social research. This background enables her to construct evaluation designs that are responsive to client needs and stakeholder engagement. She is particularly interested in the use of participatory and empowerment evaluation approaches to increase the accuracy and utilisation of evaluation findings. Rachel is skilled in translating and communicating evaluation findings to a wide audience to ensure that they are used to inform decision-making, program design and implementation.

Timoci O'Connor has evaluation and research expertise in the fields of international development, public health and education, particularly in the Pacific region, New Zealand, Australia and Southeast Asia. Tim has over 10 years' experience working with diverse populations and in various settings, from community, private, government, NGO and academic to philanthropic organisations. Tim also coordinates, lectures and tutors in a range of subjects, including Mixed Methods Research & Evaluation, Qualitative Methods for Evaluation, Health Program Evaluation, Developing Evaluation Capacity and Relating Health & Learning. Timoci is pursuing a PhD exploring the use and feedback of evaluative information through information communication technologies.

> register


Developing Monitoring and Evaluation Frameworks

presented by Ian Patrick and Anne Markiewicz    FULL DAY | CATEGORY: A

The development and implementation of Monitoring and Evaluation Frameworks at strategy, program and project levels are important processes for providing an indication of results achieved and for resourcing organisational learning. A Monitoring and Evaluation Framework defines the parameters of routine monitoring and periodic evaluation that will take place over the life of a program or other initiative. The workshop provides participants with useful, step-by-step practical guidance for developing a Monitoring and Evaluation Framework, supported by relevant background and theory. It presents a clear and staged conceptual model, discusses design and implementation issues, and considers barriers or impediments along with strategies for addressing them. Participants will learn the format and approach for developing a Monitoring and Evaluation Framework and the range of techniques and skills involved in its design and implementation, and will develop an appreciation of the parameters of the tasks involved and how to approach them.

Participants will learn:

  • The value and purpose of investing in and developing Monitoring and Evaluation Frameworks;
  • The participatory approach and processes involved in developing such frameworks;
  • The steps and stages involved and the suggested ‘Table of Contents’ for constructing a Monitoring and Evaluation Framework.

The trainers will alternate between use of a PowerPoint presentation and small group interactive work. The workshop follows a case-study approach and involves participants in the development of a Monitoring and Evaluation Framework for the case-study. In this way, the approach to training is participatory and hands-on while still conveying sufficient theory and context. 

Ian Patrick is an independent consultant and Director of Ian Patrick and Associates. His career as an evaluator spans around 20 years and includes a focus on both Australia and the Asia-Pacific region. He has broad experience across social sectors such as health, education, law and justice, community development, human rights and Indigenous issues. Ian has worked with a range of organisations and programs in developing monitoring and evaluation systems, and has conducted evaluation-related training programs including Introduction to Evaluation, Participatory Approaches in Evaluation, and Developing Monitoring and Evaluation Frameworks. He was awarded the AES Best Evaluation Policy and System Award in 2012 for the Monitoring and Evaluation Framework for the Mongolia Australia Scholarship Program. Ian is an Honorary Senior Fellow in the Development Studies Program at the University of Melbourne and was previously the leader of the evaluation practice area at the International NGO Training and Research Centre, UK.

Anne Markiewicz is an independent evaluation consultant and Director of Anne Markiewicz and Associates. She has developed a significant number of Monitoring and Evaluation Frameworks and completed a wide range of evaluation projects for government departments, non-government organisations and international agencies. Anne’s practice as an evaluator highlighted the need for early evaluation planning to provide a clear and agreed focus, develop a theory-based and evaluation-led approach, and ensure data availability to support the production of credible and useful findings that support organisational learning, program improvement and wider accountability. Anne has increasingly focused her career on designing and delivering training on the development of Monitoring and Evaluation Frameworks. Anne has received a number of awards for excellence in evaluation and is a Fellow of the Australasian Evaluation Society. Anne Markiewicz and Ian Patrick are co-authors of the textbook ‘Developing Monitoring and Evaluation Frameworks’, published by SAGE. This training program is based on that textbook and has been delivered extensively in the Australia-Pacific region and for the American Evaluation Association.

> register


Managing sustainable evaluation functions in dynamic organisational contexts

presented by Penny Hawkins    FULL DAY | CATEGORY: C

This workshop explores what makes evaluation functions sustainable and how to lead and manage evaluation in different organisations and contexts to enhance its effectiveness and sustainability. Participants will learn about

  • the factors that affect evaluation sustainability
  • management approaches specific to evaluation and related activities
  • organisational dynamics and evaluation
  • how context, including political contexts, can affect the sustainability of evaluation
  • the role of evaluation in a dynamic and fast-changing world.

The workshop will provide an opportunity to explore the issues that affect evaluation systems in organisations; identify the components of sustainable evaluation systems and how these support the evaluation function to adapt to challenges as these emerge; and also work through approaches to the development of sustainable evaluation systems in particular organisations. Participants will gain insights through a combination of short presentations by the workshop facilitator and the shared practice wisdom of participants, resulting in a broader and deeper understanding of how evaluation systems either succeed or become unsustainable. 

Penny Hawkins is an evaluation specialist with three decades of experience in international development evaluation and public policy across a wide range of sectors and organisations. Penny is the former Head of Evaluation at the UK Department for International Development (DFID). Over the past 20 years she has held evaluation leadership and management roles in government and philanthropic sectors, including at The Rockefeller Foundation in the USA and as Head of Evaluation for the Ministries of Foreign Affairs and Trade and Social Development, New Zealand. Penny has also held international evaluation leadership roles including as Chair of the OECD-DAC Network on Development Evaluation (2013-16) and as a former President and current Fellow of the AES. Penny is the founder and CEO of a woman-owned and operated international development evaluation consultancy with offices in New Zealand and Scotland. She works internationally with philanthropic, multi-lateral and private sector organisations to align strategy with impact and develop relevant and effective monitoring, evaluation and learning systems.

> register