|9am–12:30pm WORKSHOP PROGRAM|
Rubrics-enhanced evaluation (full day)
Jane Davidson, Kate McKegg, Nan Wehipeihana
Theories of evaluation (full day)
A gentle introduction to the collection and analysis of statistical data for evaluation
Designing social impact evaluations
Developing Monitoring and Evaluation Frameworks (full day)
Managing sustainable evaluation functions in dynamic organisational contexts (full day)
Learning and applying Systems Evaluation Theory (SET): From the classroom to the Harbour (full day)
|1:30–5pm WORKSHOP PROGRAM|
Davidson, McKegg, Wehipeihana continued
Aston, Aston, O'Connor continued
Atkinson, Keogh, Renger continued
Fundamentals of presentation technique (half day)
A. Foundational evaluation skills and capabilities
B. New tools, approaches and ways of thinking
C. Advanced evaluation topics
presented by Jane Davidson, Kate McKegg, Nan Wehipeihana FULL DAY | CATEGORY: C
Evaluation, by definition, is fundamentally about asking and answering evaluative questions (i.e., questions about merit/quality, worth/value, and significance/importance). Not just 'What were the results?' but 'How good, valuable, and/or important were they?' Rubrics-Enhanced Evaluation is a powerful and flexible approach for
- clearly defining what constitutes 'good', 'valuable' or 'important' in a particular context, and why; and
- systematically and transparently interpreting qualitative and quantitative evidence relative to these well-grounded definitions.
This workshop is suitable for new and experienced evaluators alike. Topics include how to make your evaluation questions truly evaluative; what rubrics are and what they are for; different rubric designs and when to use them; rubric development processes for collaborative or independent evaluation; validation and field testing tips; and rubrics-enhanced evaluation reporting. Rubrics mesh well with a diverse range of evaluation approaches. They are particularly helpful when the most important outcomes are those least easily measured.
Jane Davidson is best known for pioneering the increasingly popular Evaluation Rubrics Methodology, along with her various other refreshingly practical evaluation frameworks and approaches. Originally from Aotearoa New Zealand, Jane is former Associate Director of The Evaluation Center at Western Michigan University, where she launched and directed the world's first fully interdisciplinary Ph.D. in Evaluation. She was 2005 recipient of the American Evaluation Association’s Marcia Guttentag Award, and serves as Honorary Principal Fellow at the University of Melbourne. Jane is currently based in Seattle, and is sought after internationally as a speaker, author, evaluation coach, workshop and webinar presenter, and creator of awesome evaluation frameworks and tools.
Kate McKegg is the director of The Knowledge Institute Ltd and a member of Kinnect Group as well as of Tuakana Teina, an indigenous-led collective based in the Waikato region of New Zealand. Kate has worked in evaluation, evaluation capacity building, research, policy and public sector management since the late 1980s. She has a deep commitment to social and environmental justice and strives daily to decolonize her thinking and actions to support indigenous colleagues in their struggles for justice, sovereignty, healing and revitalization.
Nan Wehipeihana is the director of Research Evaluation Consultancy Ltd and a member of Kinnect Group. Nan's tribal affiliations are to Ngāti Tukorehe and Ngāti Raukawa, north of Wellington, and to Ngāti Porou and Te Whānau-ā-Apanui on the East Coast of New Zealand. Nan specializes in evaluation with a focus on Māori (Indigenous New Zealanders) and on building evaluation capacity with tribes and Māori organisations to evidence outcomes, including cultural outcomes. By bringing the voices and views of Māori to government and funders, she aims to offer insight into Māori values, perspectives and experiences for use in government, business and community contexts.
Theories of evaluation
presented by Brad Astbury FULL DAY | CATEGORY: A, C
This workshop provides an overview of the origins and evolution of evaluation theory. Attention to theory in evaluation has focused predominantly on program theory, and few evaluation practitioners have received formal training in evaluation theory. This workshop seeks to remedy this by introducing a framework for conceptualising different theories of evaluation and a set of criteria to support critical thinking about the practice-theory relationship in evaluation.
Participants will learn about:
- the nature and role of evaluation theory
- major theorists and their contributions
- approaches to classifying evaluation theories
- key ways in which evaluation theorists differ and what this means for practice
- dangers involved in relying too heavily on any one particular theory, and
- techniques for selecting and combining theories based on situational analysis.
Case examples will be used to illustrate why evaluation theory matters and how different theoretical perspectives can inform, shape and guide the design and conduct of evaluations in different practice settings.
This workshop aligns with competencies in the AES Evaluator’s Professional Learning Competency Framework. The identified domains are:
- Domain 1 – Evaluative attitude and professional practice
- Domain 2 – Evaluation theory
- Domain 4 – Research methods and systematic inquiry
The workshop is designed for both new and experienced evaluators and commissioners of evaluation.
Brad Astbury is a Director of ARTD Consulting and works out of the Melbourne office. He has over 18 years’ experience in evaluation and applied social research and considerable expertise in combining diverse forms of evidence to improve both the quality and utility of evaluation. He has managed and conducted needs assessments, process and impact studies and theory-driven evaluations across a wide range of policy areas for industry, government, community and not-for-profit clients. Prior to joining ARTD in 2018, Brad worked for over a decade at the University of Melbourne, where he taught and mentored postgraduate evaluation students.
A gentle introduction to the collection and analysis of statistical data for evaluation
presented by Mark Griffin FULL DAY | CATEGORY: A
A robust evaluation makes use of both qualitative and quantitative research methods. At the same time, many people commissioning or conducting evaluations have little training in quantitative methods such as survey design and statistics, and some may even feel anxious about them. This workshop is not intended to turn evaluation practitioners into hard-core data scientists. Instead, the goal is to give evaluation practitioners the tools they need to work productively and in close collaboration with data scientists, and to give evaluation commissioners the tools they need to scope out projects involving statistical components, to assess the value of subsequent bids from potential statistical consultants, and to maximise the potential for statistical work to yield true insights and business value within the commissioner’s organisation. Accordingly, the workshop will not focus on the technical intricacies of the mathematical techniques discussed. It will instead work through a series of case studies where statistical methods have been applied in a sophisticated manner, introducing a range of statistical methods, the types of research questions that can be asked using each method, and some basic guidelines for proper statistical practice, such as the importance of checking the statistical properties of a dataset prior to conducting the analysis.
During this workshop the following topics will be presented:
- Designing an evaluation project including a statistical component
- Survey design and data collection
- Statistical methods that could be employed
- Writing up and presenting statistical results
Approximately two thirds of the workshop will be spent in the presentation of PowerPoint slides, and the remaining third will be spent dividing into small groups where each group will discuss how the statistical methods presented have or could be used within the projects that the participants are involved in.
Mark Griffin is the Founding Director of Insight Research Services Associated (www.insightrsa.com), and holds academic appointments at the University of Queensland and the University of Sydney. Mark serves on the Executive Committee for the Statistical Society of Australia, and is the Founder and Co-Chair of the Special Interest Group for Business Analytics within the International Institute of Business Analysis. Mark has been the primary statistician for a number of large surveys (including a survey of 140,000 parents receiving the Positive Parenting Program in Queensland), and Insight is a member of a number of government panels, including that for the Therapeutic Goods Administration within the Australian Department of Health. Since the formation of Insight, Mark has presented over 90 two-day and 15 five-day workshops in statistics around Australia, and has recently started an annual international speaking tour.
Designing social impact evaluations
presented by Ruth Aston, Rachel Aston, Timoci O'Connor FULL DAY | CATEGORY: B
The number of complex social programs aiming to achieve social impact continues to grow. However, evaluating their impact is not simple; the challenge is ensuring that evaluation designs produce evidence to satisfy funding requirements and simultaneously maintain rigour to evaluate the impact of complex efforts (Gargani & Donaldson, 2011). We will introduce key measurement issues, and existing approaches to evaluating social impact, delineating their strengths and limitations at various stages throughout the program life cycle (needs assessment, program design, implementation, program maintenance, etc.). We will also briefly address the role of frameworks and schools of thought often associated with social impact, including but not limited to Collective Impact, co-design, implementation science, systems thinking and social complexity. Participants will then apply this knowledge in a series of exercises, including developing social impact evaluation questions and using these questions to consider and design an evaluation methodology. A guest speaker from an organisation implementing social impact programs will facilitate a discussion concerning considerations for commissioning and funding social impact evaluations. Lecture, group discussions, and exercises will be used in this full-day workshop. An evaluation case study will be provided; however, participants are encouraged to bring a case from their own practice.
At the end of the workshop participants will be able to:
- Develop social impact evaluation questions
- Categorise social impact evaluation questions based on the program life cycle
- Identify key methodological design considerations for social impact evaluation questions
- Select and justify an approach to assessing impact in an evaluation case study
- Describe existing approaches to evaluating social impact and identify their strengths and weaknesses
- Describe and explain how formative and summative evaluation can be applied in a social impact evaluation
- Critically discuss practical considerations for the design and implementation of a social impact evaluation in a real-world setting
Ruth Aston is a Research Fellow and Lecturer at the Centre for Program Evaluation at The University of Melbourne. Ruth is currently working on several evaluations in education and public health, including the Evaluation of the Indigenous Australians' Health Program and the Evaluation of the Differentiated Support for School Improvement in Victoria. Ruth's PhD investigated social change in public health, focusing particularly on the development of criteria for articulating success in achieving outcomes in complex multi-level, multi-site community interventions aiming to achieve social change in health outcomes. This is where her research interests lie: she is testing measurement models to capture and test criteria for the successful implementation (and effectiveness) of complex interventions (policy, programs, initiatives) to achieve enduring social change.
Rachel Aston is an experienced social researcher and evaluator at ARTD Consultants. She brings over eight years’ experience conducting research and evaluation for government, NGOs and in the higher education sector. Rachel has a high level of expertise in qualitative and mixed-methods research, and evaluation methodology, with an academic background in anthropology and social research. Her background enables her to construct evaluation designs that are responsive to client needs and stakeholder engagement. She is particularly interested in the use of participatory and empowerment evaluation approaches to increase the accuracy and utilisation of evaluation findings. Rachel is skilled in translating and communicating evaluation findings to a wide audience to ensure that they are used to inform decision-making, program design and implementation.
Timoci O'Connor has evaluation and research expertise in the fields of international development, public health and education, particularly in the Pacific region, New Zealand, Australia and Southeast Asia. Tim has over 10 years' experience working with diverse populations and in various settings spanning community, private, government, NGO, academic, and philanthropic organisations. Tim also coordinates, lectures and tutors in a range of subjects including Mixed Methods Research & Evaluation, Qualitative Methods for Evaluation, Health Program Evaluation, Developing Evaluation Capacity and Relating Health & Learning. Timoci is pursuing his PhD, exploring the use and feedback of evaluative information through information communication technologies.
Developing Monitoring and Evaluation Frameworks
presented by Ian Patrick and Anne Markiewicz FULL DAY | CATEGORY: A
The development and implementation of Monitoring and Evaluation Frameworks at strategy, program and project levels are important processes to adopt in order to provide an indication of results achieved and to resource organisational learning. The Monitoring and Evaluation Framework defines the parameters of routine monitoring and periodic evaluation that will take place over the life of a program or other initiative. The workshop provides participants with useful, step-by-step practical guidance for developing a Monitoring and Evaluation Framework, supported by relevant background and theory. It presents a clear and staged conceptual model, discusses design and implementation issues, and considers barriers or impediments along with strategies for addressing them. Participants will learn the format and approach for developing a Monitoring and Evaluation Framework and the range of techniques and skills involved in its design and implementation, and will develop an appreciation of the parameters of the tasks involved and how to approach them.
Participants will learn:
- The value and purpose of investing in and developing Monitoring and Evaluation Frameworks;
- The participatory approach and processes involved in developing such frameworks;
- The steps and stages involved and the suggested ‘Table of Contents’ for constructing a Monitoring and Evaluation Framework.
The trainer will alternate between use of a PowerPoint presentation and small group interactive work. The workshop follows a case-study approach and involves participants in the development of a Monitoring and Evaluation Framework for the case-study. In this way, the approach to training is participatory and hands-on while still conveying sufficient theory and context.
Ian Patrick is an independent consultant and Director of Ian Patrick and Associates. His career as an evaluator extends over around 20 years and includes a focus on both Australia and the Asia Pacific region. He has broad experience across different social sectors such as health, education, law and justice, community development, and human rights and Indigenous issues. Ian has worked with a range of organisations and programs in developing monitoring and evaluation systems, and has conducted evaluation-related training programs including on Introduction to Evaluation, Participatory Approaches in Evaluation, and Developing Monitoring and Evaluation Frameworks. He was awarded the AES Best Evaluation Policy and System Award in 2012 for the Monitoring and Evaluation Framework, Mongolia Australia Scholarship Program. Ian is an Honorary Senior Fellow, Development Studies Program at the University of Melbourne and was previously the leader of the evaluation practice area at the International NGO Training and Research Centre, UK.
Anne Markiewicz is an independent evaluation consultant and Director of Anne Markiewicz and Associates. She has developed a significant number of Monitoring and Evaluation Frameworks and completed a wide range of evaluation projects for government departments, non-government organisations and international agencies. Anne’s practice as an evaluator highlighted the need for early evaluation planning to provide a clear and agreed focus, develop a theory-based and evaluation-led approach, and ensure data availability to support the production of credible and useful findings that furnish and support organisational learning, program improvement and wider accountability. Anne has increasingly focused her career in designing and delivering training on the development of Monitoring and Evaluation Frameworks. Anne has received a number of awards for excellence in evaluation and is a Fellow of the Australasian Evaluation Society. Anne Markiewicz and Ian Patrick are co-authors of the text book ‘Developing Monitoring and Evaluation Frameworks’ published through SAGE. This training program is based on that text book and the training has been delivered extensively in the Australia-Pacific region and for the American Evaluation Association.
Managing sustainable evaluation functions in dynamic organisational contexts
presented by Penny Hawkins FULL DAY | CATEGORY: C
The purpose of this workshop is for participants to learn about what makes evaluation functions sustainable and how to lead and manage evaluation in different organisations and contexts to enhance its effectiveness and sustainability. Participants will learn about:
- the factors that affect evaluation sustainability
- management approaches specific to evaluation and related activities
- organisational dynamics and evaluation
- how context, including political contexts, can affect the sustainability of evaluation
- the role of evaluation in a dynamic and fast changing world.
The workshop will provide an opportunity to explore the issues that affect evaluation systems in organisations; identify the components of sustainable evaluation systems and how these support the evaluation function to adapt to challenges as these emerge; and also work through approaches to the development of sustainable evaluation systems in particular organisations. Participants will gain insights through a combination of short presentations by the workshop facilitator and the shared practice wisdom of participants, resulting in a broader and deeper understanding of how evaluation systems either succeed or become unsustainable.
Penny Hawkins is an evaluation specialist with three decades of experience in international development evaluation and public policy across a wide range of sectors and organisations. Penny is the former Head of Evaluation at the UK Department for International Development (DFID). Over the past 20 years she has held evaluation leadership and management roles in government and philanthropic sectors, including at The Rockefeller Foundation in the USA and as Head of Evaluation for the Ministries of Foreign Affairs and Trade and Social Development, New Zealand. Penny has also held international evaluation leadership roles including as Chair of the OECD-DAC Network on Development Evaluation (2013-16) and as a former President and current Fellow of the AES. Penny is the founder and CEO of a woman-owned and operated international development evaluation consultancy with offices in New Zealand and Scotland. She works internationally with philanthropic, multi-lateral and private sector organisations to align strategy with impact and develop relevant and effective monitoring, evaluation and learning systems.
Learning and applying Systems Evaluation Theory (SET): From the classroom to the Harbour
presented by Lewis Atkinson, Brian Keogh, Ralph Renger FULL DAY | CATEGORY: B
Note: this workshop has an additional charge of $150.00 to cover charter cost of ferry
This workshop’s purpose is to build on lessons learned from the AES 2018 System Evaluation workshop to offer an engaging, self-reflecting, and self-evaluating opportunity to learn about systems thinking and SET. Since a series of EJA publications (Renger 2015; Renger, McPherson, Kontz-Bartels, & Becker, 2016; Renger, 2016; Renger, Foltysova, Ienuso, 2017; Renger, & Booze, 2017; Renger, Foltysova, Renger, & Booze, 2017; Renger, Keogh, Hawkins, Foltysova, & Souvannasacd, 2018), the application of Systems Evaluation Theory (SET) has continued to grow, and it is becoming an important methodological tool to unpack wicked problems (Renger, Renger, Renger, Donaldson, & Hart, submitted).
The workshop objectives are as follows:
Classroom session (1st half day):
1. Using the Socratic teaching method and Kolb's experiential learning cycle, participants will learn the limitations of program evaluation approaches in capturing context and evaluating complexity. More specifically, through the use of case examples, participants will learn about logic model limitations, including:
a. the linear depiction of underlying program assumptions,
b. the consequences to impact evaluation when summarising key program elements devoid of context,
c. the limitations when trying to use multiple logic models to capture complexity (i.e., trying to adapt a tool to a problem it was never intended to solve)
d. the benefits of using the systems thinking concepts of elements, relationships & boundaries to guide adaptation to emergent and dynamic realities in complex environments
2. Using a family systems exercise (i.e., one that does not require substantive expertise and is therefore non-threatening and self-evaluative) participants will learn the SET basics.
3. Using the ever-controversial Murray Darling Basin Plan, together with insights arising from recent reviews and outcomes observed from its implementation, participants will learn how to apply the SET concepts of boundaries, perspectives, feedback loops, cascading failures, etc.
Practical session (2nd half day):
4. To apply the principles learned in the classroom in a real-world setting, participants will be guided through Sydney Harbour (i.e., the Sydney Ferry System) to observe and experience the SET principles in action at first hand. This session will include group reflection on this experience to process it, generate learnings, and develop an action plan for applying the SET concepts to capture context and evaluate complexity in participants' own projects. As noted above, this is a customised tour, so workshop participants can see the harbour from a new, systems perspective.
Lewis Atkinson is certified in the systems thinking approach® to strategic management. He is a Global Partner with the Haines Centre for Strategic Management Ltd and has many government, private and non-profit clients in Australia and internationally. He is also a member of the Ian Potter Foundation Evaluation pool. During a long career at Meat and Livestock Australia, he held several executive positions over 16 years, most recently as Manager Knowledge & Program Evaluation, where he led the evaluation technical working group on behalf of the Council of Rural Research and Development Corporations Chairs (CRRDCC) to report the collective impact of Rural R&D at multiple levels (project, program, industry sector) across 15 different agricultural industry sector organisations. He is also a senior associate with Impact Innovation, a specialist innovation and technology commercialisation consultancy. Through the government’s My Innovation Advisory service, Impact Innovation provides innovation and commercialisation advisory support to businesses across a range of industries and sectors, including commercialisation strategy, licensee engagement and start-up company formation, and it facilitates workshop and training activities that provide skills development and collaboration training for Queensland businesses as part of the Innovate Queensland program.
Brian Keogh has over thirty years’ experience in management and consulting roles working across government, commercial companies and not-for-profits. With Julieanne Campbell he established Cobalt59, a company that works across evaluations and audits through to strategy creation and business development. In the evaluation sphere he has evaluated most river systems in NSW. He helped establish the evaluation process for the Sydney Catchment and was a leader of the Sydney Catchment audit team. He advised in the development of the evaluation framework for the Murray Darling Basin Plan. In 2018 he worked with Ralph Renger in developing Systems Evaluation Theory through the evaluation of rural health medical technology in the Mid-West and Mountain West States of the US.
With over two decades of evaluation experience, Ralph Renger specialises in solving evaluation problems. He is known for advancing evaluation methods, such as mainstreaming logic modeling through the three-step ATM approach (Renger & Titcomb, 2002), designing tools for when clients may have forgotten about evaluation until the end (Renger, 2011), and adapting methods like root cause analysis for making program improvements (Coskun, Akande, & Renger, 2012; Renger & Foltysova, 2013). More recently he published Systems Evaluation Theory (SET) (Renger, 2015), representing a breakthrough in developing evaluation methods that better capture the reality in which agency programs work.
Fundamentals of presentation technique
presented by Gerard Atkinson HALF DAY (AFTERNOON) | CATEGORY: A
Getting up and speaking to an audience is a necessary and inevitable part of being an evaluator. As evaluators we need to be able to use presentations to clearly communicate new and sometimes complex ideas to a range of audiences. In addition, we need to be compelling enough to capture stakeholders' attention so that the content is understood and ultimately used. Doing this requires the evaluator to be skilled at creating and delivering presentations.
This workshop is designed to give evaluators a grounding in the core skills of presentation, including a toolkit for developing and delivering effective presentations. Participants will work through a series of short modules and practical exercises on presentation techniques, culminating in the preparation and delivery of a short presentation. Participants will also receive a workbook with more detailed notes to support them once they have completed the workshop.
The modules will cover:
- How presentations fit into theories of evaluation
- The different types of presentations in evaluation
- Preparing for delivering presentations, including:
– topic and audience research
– structure and content development
– developing speaker notes and audience handouts (and why they’re not the same)
- Presentation delivery, including:
– body language
– vocal delivery
– anxiety management
– presenting live, presenting for podcasts/ webinars and presenting to camera
- Handling audience questions
The workshop is targeted at all evaluators who may be delivering presentations as part of their work or are interested in developing their skills to deliver presentations. It aligns with competencies 6 (Interpersonal Skills) and 7 (Evaluation Activities) of the Evaluators’ Professional Learning Competency Framework.
Gerard Atkinson is an evaluator, actor and opera singer who has performed to audiences in Australia and internationally. Prior to joining ARTD as a Manager he led the evaluation unit at the Australia Council for the Arts, forming the team and developing a strategy for monitoring and evaluation of its portfolio of programs. Gerard is passionate about storytelling and the communication of evaluation findings, and at AES 2018 he presented in three sessions including facilitating an interactive session “Evolving the Evaluation Deliverable” which was subsequently published on the AES blog.
Gerard has received training in advanced business presentation techniques at Southern Methodist University in Dallas, Texas, and has completed programs on customer engagement through the Disney Institute in Orlando, Florida. Gerard also has an MBA in Business Analytics, an MA in Arts Management, and a BSc in Physics.