The aim of this paper is to present a critical analysis of the satisfaction of first year Information Systems students with asynchronous e-learning systems at the University of Tasmania and the University of Adelaide. It measures whether factors such as content, learner interface, feedback and assessment, personalisation, learning community and access affect students' satisfaction with asynchronous e-learning systems, and compares whether satisfaction differs between the systems used at the two universities. The study found no difference in satisfaction between the different asynchronous e-learning systems used in the two universities, but found that content, personalised feedback, learner interface and learning community were significantly related to students' satisfaction. These findings may influence how lecturers and designers utilise asynchronous e-learning systems for first year university students. Well structured, quality content presented in an easy to understand format, together with personalised feedback on progress, are important elements of effective e-learning. Coupled with these factors are the need to learn in a community and the ability to select resources from the asynchronous e-learning system to suit students' personal needs.
The University sector is now embracing e-learning not only for economic reasons but also for the flexibility and opportunities it provides (DEST, 2002). There exist a number of e-learning systems that are commonly employed within Universities; these include WebCT, BlackBoard and numerous in-house systems. Within these e-learning systems there are a number of different tools, including discussion boards, email, slideshows, streaming audio and video (Ashley 2003). Communication tools form an integral part of an e-learning system as they provide a means for communication and learning to occur between students as well as between students and lecturer. Feedback is integral to the learning and assessment process.
E-learning systems can either operate in an asynchronous or synchronous mode. Essentially the asynchronous mode is where communication, collaboration and learning can occur in "different time - different place" manner and where users can select when they wish to communicate (Ashley, 2003). This may be very useful where lecturers need to manage large numbers of students. University students face a number of issues (Schrum & Hong, 2002), such as balancing the competing demands of work, family and study. The ability to access and communicate in asynchronous mode can meet many of their needs of a "just for me" learning environment (Hisham, 2004; Rogers, 2000). The synchronous mode allows people to interact with each other in a "same time - different place" manner.
As with traditional paper/print based learning, there are a number of factors to be considered in developing asynchronous learning systems. These include learning styles, communication and feedback, and access. Indeed, e-learning is often used in a blended manner (Valiathan, 2002; Wentling et al., 2000) to complement traditional methods of delivery. Successful systems often require a multifaceted approach to satisfy students' requirements.
The need to measure satisfaction is critical in order to evaluate whether the systems currently employed actually meet users' needs. Whilst attention has been paid to designing and evaluating e-learning systems (Lambert, 2003; Trinidad, 2004), there has been limited research into university students' satisfaction with asynchronous e-learning systems in Australia. A number of instruments exist for measuring user satisfaction with information and communication systems. Wang (2003) undertook an international study which specifically examined students' satisfaction with asynchronous e-learning systems. He sought to examine whether five factors - content, learner interface, learning performance (feedback and assessment), personalisation and learning community - influence user satisfaction, and found that all of these factors apart from learning performance were related to student satisfaction. Content and learner interface remain key factors for determining satisfaction. Swan (2001) reported on the relationship of course structure to students' satisfaction. When considering a learning system, it is important to have a suitable learner interface that meets students' needs, and material must be presented in a user friendly way. Sharing and learning from others, whether other students or the lecturer, in a learning community is important. Much has been made in the literature of the constructivist nature of learning and the need for students to interact in an enriching manner (Salmon, 2002b). Constructivism holds that students learn by actively constructing new concepts (Gery, 2002).
Whilst Wang (2003) presented five variables that could influence learner satisfaction, he did not specifically include access, which is of considerable importance to students (Johnson & Ruppert, 2002). Wang's instrument was based on previous research undertaken by Doll & Torkzadeh (1988). This study seeks to build on the research undertaken by Wang (2003); additionally, access is considered in this study as a possible sixth factor that could affect student satisfaction.
Given the importance of e-learning to Australian universities and the value of understanding the manner in which it is being used, we are faced with the need to measure whether students are satisfied with e-learning systems. This study aims to evaluate six factors that may affect the level of satisfaction with asynchronous e-learning systems for first year Information Systems students within the Commerce faculty at the University of Tasmania and the University of Adelaide, as well as any differences in student satisfaction between the different e-learning systems used in the two universities.
A number of hypotheses were considered initially.
H01: Students' satisfaction with asynchronous e-learning systems is not related to the content provided through the e-learning systems.
H02: Students' satisfaction with asynchronous e-learning systems is not related to the learner interface displayed by the e-learning systems.
H03: Students' satisfaction with asynchronous e-learning systems is not related to the feedback and assessment provided through the e-learning systems.
H04: Students' satisfaction with asynchronous e-learning systems is not related to the personalisation option provided through the e-learning systems.
H05: Students' satisfaction with asynchronous e-learning systems is not related to the learning community provided by the e-learning systems.
H06: Students' satisfaction with asynchronous e-learning systems is not related to gaining access to the e-learning system.
H07: There is no difference in students' satisfaction with asynchronous e-learning systems between the two universities.
The researchers also included this seventh hypothesis to investigate if there is a difference in the students' satisfaction level between the two universities participating in this research.
Most students had attended university for less than one year (80%), were aged 18-25 years (80%), were currently using e-learning in their first year units (96%) and considered themselves to be intermediate users (71%).
Responses were obtained in two waves, early and late responders, which permitted an assessment of any systematic differences associated with time of response. The Mann-Whitney test (Table 1) considered university, study duration, age and computer experience. The P-values were 0.075, 0.324, 0.537 and 0.317 respectively; as all were greater than 0.05, we can reasonably conclude that these variables were not affected by time of response, which gives some confidence that non-response is not an important source of bias.
Table 1: Mann-Whitney test of early versus late responders

|                        | University | Study duration | Age  | Comp. exp. level |
| Asymp. Sig. (2-tailed) | .075       | .324           | .537 | .317             |
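The early/late responder check described above can be sketched with SciPy's Mann-Whitney U test. This is a minimal illustration only: the ordinal codes below are hypothetical (e.g. age bands coded 1-4), not the study's data.

```python
# Illustrative sketch: Mann-Whitney U test comparing early and late
# responders on an ordinal demographic variable. The two score lists
# are hypothetical stand-ins for the study's response waves.
from scipy.stats import mannwhitneyu

early_wave = [1, 2, 1, 3, 2, 2, 1, 4, 2, 3]  # hypothetical ordinal codes
late_wave = [2, 1, 3, 2, 4, 1, 2, 3, 2, 2]

u_stat, p_value = mannwhitneyu(early_wave, late_wave, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")

# A p-value above 0.05 gives no evidence of a systematic difference
# between the two response waves, as the paper reports for Table 1.
```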
The 137 respondents provided a sufficient sample for a factor analysis to be performed (Foster, 2001). Factor analysis is an exploratory technique that reduces a large number of variables to their underlying component factors (Coakes & Steed, 2003). An extraction was undertaken in SPSS using Principal Component Analysis with the Varimax rotation method and Kaiser normalization. The rotation converged in 6 iterations, with single-item factors excluded. The 28 items were reduced to 26 items (Table 3), which were then grouped into component factors identified as personalised feedback, learning community, learner interface, content and access. Refer to Appendix A for the specific questions in each factor.
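The extraction step described above can be sketched in code: principal components of the item correlation matrix, followed by a varimax rotation (Kaiser's standard algorithm). This is a sketch under simplified assumptions, not the SPSS procedure itself: the 137 x 8 response matrix is simulated, whereas the study's instrument had 28 items.

```python
# Sketch: PCA on an item correlation matrix, then varimax rotation.
# Simulated data stand in for the questionnaire responses.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a loading matrix towards varimax simple structure (Kaiser)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T
            @ (rotated**3 - (gamma / p) * rotated @ np.diag((rotated**2).sum(axis=0)))
        )
        rotation = u @ vt
        new_total = s.sum()
        if new_total < total * (1 + tol):  # convergence of the varimax criterion
            break
        total = new_total
    return loadings @ rotation

rng = np.random.default_rng(42)
responses = rng.normal(size=(137, 8))   # 137 respondents, 8 hypothetical items
responses[:, 1] += responses[:, 0]      # induce two correlated item clusters
responses[:, 3] += responses[:, 2]

corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)       # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1][:2]         # retain the two largest components
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
rotated = varimax(loadings)
```

Because the rotation is orthogonal, each item's communality (row sum of squared loadings) is unchanged; only the distribution of loadings across factors is simplified.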
Cronbach's alpha was used to test the reliability of the items: reliability testing ensures the consistency of results across the items within each component (Foster, 2001). The values are reported in Table 2. As all values were greater than 0.7, the data could be retained for further analysis. It should be noted that the factor analysis showed the Personalisation and Feedback items loading on the same component, Personalised Feedback. Thus, hypotheses H03 and H04 were combined to form a new hypothesis:
H03: Students' satisfaction with asynchronous e-learning systems is not related to the personalised feedback provided through the e-learning systems.
Table 2: Reliability of the component factors

| Variables | N of items | Cronbach's Alpha |
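The reliability check above follows the standard definition of Cronbach's alpha. A minimal sketch, using hypothetical 5-point Likert responses rather than the study's data:

```python
# Sketch: Cronbach's alpha from its standard definition,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(items):
    """Alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: four items answered fairly consistently by ten students
likert = np.array([
    [4, 4, 5, 4],
    [5, 5, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 1],
    [5, 5, 4, 5],
])
alpha = cronbach_alpha(likert)
print(f"Cronbach's alpha = {alpha:.2f}")
# By the 0.7 rule of thumb used in the paper, a value above 0.7
# suggests the item block is internally consistent.
```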
Normality was not established and, as the variables were measured on an ordinal scale, non-parametric testing was necessary (Foster, 2001, p. 7). A non-parametric bivariate correlation using Spearman's rho was employed to test hypotheses H01 to H06 (excluding H04). The values are reported below in Table 4.
From Table 4, the P-values for the association of satisfaction with content, learner interface, personalised feedback, learning community and access were 0.00, 0.00, 0.00, 0.00 and 0.016 respectively. As all P-values were less than 0.05, we can reject all the null hypotheses and conclude that each factor (content, learner interface, personalised feedback, learning community and access) is related to students' satisfaction with asynchronous e-learning systems. Personalised feedback is clearly of major concern to students, followed by content, learner interface and learning community. The means of achieving personalised feedback with existing e-learning systems are limited, as assessment criteria cannot be linked across tasks. Given the importance of reflection in the learning process (Stansfield et al., 2004), it is important to have systems that provide this level of feedback.
Table 4: Spearman's rho correlations with students' satisfaction

| N | Direction | Spearman's rho correlation | Result |
| H04 | Note: Personalisation and Feedback combined as a result of the factor analysis |
** Correlation is significant at the 0.01 level (2-tailed)
* Correlation is significant at the 0.05 level (2-tailed)
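Each cell of Table 4 comes from a bivariate Spearman test of the kind sketched below. The 5-point scores are hypothetical, chosen only to illustrate a factor score correlating with overall satisfaction.

```python
# Sketch: Spearman's rho between one factor score and overall satisfaction.
# The Likert-style scores are invented for illustration.
from scipy.stats import spearmanr

content_score = [4, 5, 3, 4, 2, 5, 3, 4, 1, 5]
satisfaction = [4, 5, 3, 5, 2, 4, 3, 4, 2, 5]

rho, p_value = spearmanr(content_score, satisfaction)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")
# A positive rho with p < 0.05 would lead to rejecting the null hypothesis
# of no association between the factor and satisfaction.
```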
We employed the Mann-Whitney test to examine the seventh hypothesis, on the difference in students' satisfaction between the two universities. As the P-value of 0.466 was greater than 0.05, H07 could not be rejected, and hence there is no evidence of a difference in satisfaction between the asynchronous e-learning systems at the University of Tasmania and the University of Adelaide. It would appear that the issue is not which proprietary asynchronous e-learning system is employed, but how it is employed. A well-designed interface, the capacity for personalised feedback and appropriate content are more important than the particular system chosen.
Asymp. Sig. (2-tailed): .466
Students surveyed in this research had spent less than one year at university and were aged 18-25 years, and hence were relatively new to a university learning environment. Students require a supportive learning community in order to be satisfied with their asynchronous e-learning environment, and the research findings support the use of tools such as discussion boards and virtual groups to establish such a community. Asynchronous e-learning systems need to provide a suitable interface for students, as well as the opportunity to access content easily at a time and place that meets their individual learning needs. Whilst access had a low correlation, this might mean that many issues with access and bandwidth have been resolved, rather than that access lacks importance. From this research there appears little evidence to suggest that one proprietary asynchronous e-learning system is better than another.
The importance of asynchronous e-learning systems to universities is well established. The question is not if, but how asynchronous e-learning systems should be used. The findings are significant, in that they provide lecturers with clear directions on what needs to be undertaken when, in order to satisfy students' needs. On the basis of this research, it is clear that further work needs to be undertaken to develop asynchronous e-learning systems that provide for personalised feedback through automation tools or rubrics that cater for the assessment criteria across tasks.
Campton, P. (2003). E-learning-'Trick or treat'? Using technology for teaching and learning in a tertiary setting. Proceedings 20th ASCILITE Conference. Adelaide, December 7-10. http://www.ascilite.org.au/conferences/adelaide03/docs/pdf/585.pdf
Coakes, S. & Steed, L. (2003). SPSS: Analysis without Anguish. Queensland: Wiley.
Department of Education, Science and Training (2002). Universities Online: A survey of online education and services in Australia. Occasional Paper Series, J. S. McMillan Printing Group, Canberra. http://www.dest.gov.au/highered/occpaper/02a/02_a.pdf
Doll, W.J. & Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly, 12(2), 259-274.
Friedman, T.L (1999). Foreign Affairs: Next, It's E-ducation. The New York Times, 17 November 1999. Late Edition, Section A, p. 25:col.5.
Foster, J. (2001). Data analysis using SPSS for Windows version 8 to 10: A beginner's guide. London: Sage Publications.
Gery, G. (2002). Task support, reference, instruction, or collaboration? Factors in determining electronic learning and support options. Technical Communication, 49(4), 420-427.
Heppell, S. (2003). The Future of E-Learning. Connected, 6. [viewed 23 July 2003, verified 17 Oct 2004] http://www.ngflscotland.gov.uk/connected/connected6/E-Learning/FutureELearning.asp
Hisham, N., (2004). Measuring First Year Information Systems Students' Satisfaction with Asynchronous E-learning Systems in Three Universities within Australia. Unpublished Masters Thesis, University of Tasmania, Hobart.
Jackson, R.H. (2001). Defining e-Learning - Different Shades of "Online". Web Based Learning Resources Library. http://www.outreach.utk.edu/weblearning/ [viewed 2 Jul 2004, not found 17 Oct 2004]
Johnson, A. & Ruppert, S. (2002). An evaluation of accessibility in online learning management. Library Hi Tech, 20(4), 441-451.
Lambert, S. (2003). Learning Design at the University of Wollongong. Proceedings 20th ASCILITE Conference. Adelaide, 7-10 December. http://www.ascilite.org.au/conferences/adelaide03/docs/pdf/643.pdf
Parker, M. B (2002). Three Pillars of Technology-enhanced e-Learning. Proceedings 4th Annual World Wide Web Conference, University of Stellenbosch. 4-6 September. [viewed 17 July 2004, verified 17 Oct 2004] http://general.rau.ac.za/infosci/www2002/Full_Papers/Parker%20M/Parker_TechEnhE-learning.pdf
Rosenberg, M.J. (2001). E-learning: Strategies for delivering knowledge in the digital age. New York: McGraw-Hill.
Rogers, J. (2000). Communities in practice: A framework for fostering coherence in virtual communities. Educational Technology and Society, 3(2). http://ifets.ieee.org/periodical/vol_3_2000/e01.html
Salmon, G. (2002a). Computer mediated conferencing for management learning at the Open University. Management Learning, 31(4), 491.
Salmon, G. (2002b). Learning from the future. Proceedings of the Annual Conference of the Australian Computers in Education Conference (ACEC). Hobart, 11-13 July. [verified 17 Oct 2004] http://www.pa.ash.org.au/acec2002/uploads/documents/store/conferences/conf_185_gilly_salmon.pdf
Schrum, L. & Hong, S. (2002). Dimensions and strategies for online success: Voices from experienced educators. Journal of Asynchronous Learning Networks, 6(1), 57-67. http://www.sloan-c.org/publications/jaln/v6n1/v6n1_schrum.asp
Soon, K.H., Sook, K.I., Jung, C.W. & Im, K.M. (2000). The effects of Internet-based distance learning in nursing. Computing in Nursing, 18(1), 19-25.
Stager, G. (2004). The Case for Computing. [viewed 15 July 2004, verified 17 Oct 2004] http://stager.org/articles/thecaseforcomputing.html
Stansfield, M., McLellan, E. & Connolly, T. (2004). Enhancing student performance in online learning and traditional face-to-face class delivery. Journal of Information Technology Education, 3, 173-188.
Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), 306-331.
Thurmond, V., Wambach, K., Connors, H.R. & Frey, B.B. (2002). Evaluation of student satisfaction: Determining the impact of a web-based environment by controlling for student characteristics. The American Journal of Distance Education, 16, 169-190.
Trinidad, S. (2004). Helping teachers successfully use e-learning environments. Proceedings Australian Computers in Education Conference (ACEC). Adelaide, 5-8 July. http://www.acec2004.info/uploads/documents/store/conferences/conf_P_78_finalacec2004-oles.doc
Valiathan, P. (2002). Blended learning models. Learning Circuits. ASTD. [viewed 24 April 2004, verified 17 Oct 2004] http://learningcircuits.org/2002/aug2002/valiathan.html
Warner, D. (2004). Assignment management: A bottom-up approach. Proceeding Australian Computers in Education Conference (ACEC). Adelaide, 5-8 July. [verified 17 Oct 2004] http://www.acec2004.info/uploads/documents/store/conferences/conf_P_43_assignmentmanagement2004-02-19.doc
Wentling, T.L., Waight, C., Gallaher, J., La Fleur, J., Wang, C. & Kanfer, A. (2000). E-learning - A review of literature. http://learning.ncsa.uiuc.edu/papers/elearnlit.pdf [11 Mar 2004, verified 17 Oct 2004].
Appendix A: Questionnaire
Demographic data (not listed)
Content (factor 1)
101. The e-learning system provides content that exactly fits my needs.
102. The e-learning system provides useful content.
103. The e-learning system provides sufficient content.
104. The e-learning system provides up-to-date content.
Learner interface (factor 2)
201. The e-learning system is easy to use.
202. The e-learning system makes it easy for me to find content I need.
203. The content provided by the e-learning system is stable.
204. The e-learning system is user-friendly.
205. The operation of the e-learning system is stable.
Feedback and assessment (factor 3)
301. The e-learning system responds to my requests fast enough.
302. The e-learning system makes it easy for me to evaluate my learning performance.
303. The testing methods provided by the e-learning system are easy to understand.
304. The testing methods provided by the e-learning system are fair.
305. The e-learning system provides secure testing environments.
306. The e-learning system provides testing results promptly.
Personalization (factor 4)
401. The e-learning system enables me to control my learning progress.
402. The e-learning system enables me to learn the content I need.
403. The e-learning system enables me to choose what I want to learn.
404. The e-learning system records my learning progress.
405. The e-learning system records my learning performance.
406. The e-learning system provides the personalized learning support.
Learning community (factor 5)
501. The e-learning system makes it easy for me to discuss questions with my lecturers and/or tutors.
502. The e-learning system makes it easy for me to discuss questions with other students.
503. The e-learning system makes it easy for me to share what I learn with the learning community.
504. The e-learning system makes it easy for me to access the shared content from the learning community.
Access (factor 6)
601. The e-learning system is difficult to access.
602. The speed of access to the e-learning system is slow when accessed from the university.
603. The speed of access to the e-learning system is slow when accessed from home.
Overall satisfaction
701. As a whole, I am satisfied with the e-learning system.
702. As a whole, the e-learning system is successful.
703. Using e-learning systems to enhance my educational experience is valuable.
|Authors: Nadira Hisham, School of Information Systems, University of Tasmania, Australia. email@example.com
Paul Campton, School of Information Systems, University of Tasmania, Australia. Paul.Campton@utas.edu.au
Des FitzGerald, School of Mathematics & Physics, University of Tasmania, Australia. D.FitzGerald@utas.edu.au
Please cite as: Hisham, N., Campton, P. & FitzGerald, D. (2004). A tale of two cities: A study on the satisfaction of asynchronous e-learning systems in two Australian universities. In R. Atkinson, C. McBeath, D. Jonas-Dwyer & R. Phillips (Eds), Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference (pp. 395-402). Perth, 5-8 December. http://www.ascilite.org.au/conferences/perth04/procs/hisham.html
Print version errata: Tables 3 to 6 in the print version should be numbered Tables 2 to 5.
© 2004 Nadira Hisham, Paul Campton & Des FitzGerald
The authors assign to ASCILITE and educational non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to ASCILITE to publish this document on the ASCILITE web site (including any mirror or archival sites that may be developed) and in printed form within the ASCILITE 2004 Conference Proceedings. Any other usage is prohibited without the express permission of the authors.