Developing Self-Monitoring Intelligent Tutoring Systems For Accounting Education: Some Preliminary Results

L. Richard Ye, Ph.D.

Department of Accounting and Management Information Systems

College of Business Administration and Economics

California State University, Northridge

Northridge, CA, 91330-8245, USA
rye@csun.edu

Glen L. Gray, Ph.D., CPA

Department of Accounting and Management Information Systems

College of Business Administration and Economics

California State University, Northridge

Northridge, CA, 91330-8245, USA

glen.gray@csun.edu

Abstract

In response to the growing real-world significance of expert systems and artificial intelligence, many business schools include these topics in their curricula. Learning about expert systems is not just learning how to use a new software package. It is learning about a different role for computers, namely, aiding in decision-making. This makes expert systems particularly challenging for both students and instructors.

This paper describes the development and testing of a computer-based system to teach expert system concepts. The system integrates traditional computer-aided instruction (CAI) with artificial intelligence, hypertext, and information trace capabilities to create "Self-Monitoring Intelligent Tutoring Systems" (SMITS). SMITS can be used to: (1) provide an individualized, multimedia tutoring environment to supplement classroom instruction; (2) assess and help improve students' skills in critical thinking and judgment; (3) aggregate information across individuals to evaluate the educational needs of particular groups of students; and (4) facilitate research in information search strategies and decision-making processes.

  1. Introduction

In response to the growing real-world significance of expert systems and artificial intelligence, many business schools are including these topics in their curricula. One recent survey by Teer and Teer [28] found that 85% of business schools either offered or planned to offer instruction about expert systems.

In the accounting profession, expert systems have consistently appeared in the annual list of technologies expected to have the most impact on CPAs [2]. In addition, several education-related publications have underscored the importance of expert systems within the accounting curriculum (e.g., [1, 3]).

Learning about expert systems is not just learning how to use a new software package. It is learning about a different role for computers, namely, aiding in decision-making. Additionally, developing expert systems helps students learn how to research and assemble information that can be translated into a decision-making process. As such, teaching expert system development is particularly difficult: instructors must teach the extraction and documentation of decision-making knowledge as well as the expert system software itself.

Exacerbating the teaching challenge is the wide diversity of students' computer interests and experience. Some students own computers and have years of hands-on experience; others have very little general computer experience.

Unfortunately, accounting information systems (AIS) textbooks provide little help to either instructors or students on expert systems. According to a study by White [33], current AIS textbooks are inadequate and misdirected when compared with accounting educators' and practitioners' recommendations regarding expert system coverage.

This paper reports the development and testing of an enhanced computer-aided instruction (CAI) system called Self-Monitoring Intelligent Tutoring Systems (SMITS), which integrates traditional CAI with artificial intelligence (AI), hypertext, and information tracing techniques to facilitate the teaching of expert system concepts and development. The remainder of this paper is divided into four sections. Section 2 reviews the literature on expert systems instruction. Section 3 provides the conceptual foundation of the SMITS under development. Section 4 reports the results of developing and testing a SMITS prototype, and Section 5 discusses planned future system developments and research activities.

  2. Background

Artificial intelligence (AI) theory and its application, expert systems, play many important roles in business as well as non-business organizations [4, 10, 14, 17, 18, 23, 30, 31, 32, 34]. Reflecting this real-world significance, expert systems are receiving coverage in a significant number of business schools. For example, Teer and Teer [28] reported that 85% of business schools either offered or planned to offer instruction about expert systems.

In the accounting profession specifically, expert systems and AI have consistently appeared in the annual list of technologies expected to have the most impact on CPAs, published by the Advanced Technology Committee of the AICPA [2]. Several education-related publications since the then-Big 8 accounting firms' white paper [3] have underscored the need to cover expert systems in the accounting curriculum (e.g., see [1]). A recent paper by Brown, Baldwin-Morgan and Sangster [9] summarizes 31 papers supporting the integration of expert systems into accounting education.

Learning about expert systems involves not only an understanding of a computer-based information system's role in managerial decision-making, but also an understanding of the decision-making process itself. Bouwman and Knox-Quinn [7] concluded that developing expert systems helps fulfill the "learning to learn" objectives advocated by the Accounting Education Change Commission [1]. Specifically, Bouwman and Knox-Quinn [7] stated:

Evidence suggests that knowledge engineering [developing an expert system] provides an environment in which students learn to (1) search, (2) read with a problem solving frame of mind, (3) communicate logically, (4) organize and structure accounting knowledge, and (5) logically problem solve, while learning accounting content material.

Teaching expert system development is particularly difficult. Students generally have little or no experience in extracting and documenting decision-making processes. Therefore, instructors must teach both the extraction and documentation of problem-solving knowledge and the expert system software itself. In contrast, teaching students how to use word-processing or spreadsheet programs is generally easier, since students already have the skills to write papers or manipulate numbers. Under those conditions the tasks are familiar, so instructors can concentrate on the software.

Adding to the challenge is the fact that students enrolled in the same class can have a wide range of background experience in computer use and applications development. On the one hand, it is hard to keep computer-literate students challenged. They are usually beyond the stage of operational knowledge of a computer and are more interested in developing new applications. On the other hand, some incoming college students find it difficult to complete even relatively simple tasks on the computer.

The instructor must take all of these factors into consideration when deciding how much of the limited classroom time to allocate to various aspects of expert systems development. Too much coverage will bore the more experienced students; too little will leave the less experienced students frustrated and may reinforce a computer-avoidance attitude.

One way to enhance the classroom coverage is the use of computer labs where students can use supplemental materials and learn at their own pace. The remainder of this paper describes the development and testing of a computer-based tutoring system called SMITS that students can use in an open student lab (and/or at home) to learn about expert system concepts and development.

  3. SMITS: A Conceptual Framework

Advanced information technologies are now making it possible to greatly expand the scope and effectiveness of traditional computer-aided instruction (CAI), which is evolving from a passive tutor toward an effective tool for helping students develop critical thinking and analytic capabilities.

This paper reports on the development and testing of a Self-Monitoring Intelligent Tutoring System (SMITS) that integrates traditional CAI with artificial intelligence (AI), hypertext, and information tracing techniques. AI offers the ability to capture the expert knowledge of a subject domain, to evaluate a student's approximation of that knowledge, and to implement tutoring strategies that reduce the difference between expert and student performance [21, 25]. Hypertext greatly improves instructional flexibility and encourages the student to take a more active role in the learning process [19, 27]. Information tracing helps optimize both pedagogical effectiveness and system performance.

Together, we believe these enhancements to CAI provide a synergistic system that can be used to: (1) provide educators with an individualized tutoring environment to supplement classroom instruction, (2) assess and improve students' skills in critical thinking and judgment, (3) aggregate information across individuals to evaluate the educational needs of particular groups of students, and (4) facilitate research in information search strategies and decision-making processes.

  3.1. Limitations of Traditional CAI Applications

CAI refers to a class of computer programs that present instructional material in a carefully designed sequence [22]. Typically, the student is guided through the material by a series of questions. If the student provides the correct response, the computer displays the next frame of material and corresponding questions. If the response is incorrect, the previous material and question are re-displayed. This process is repeated until all of the material has been presented and the bank of questions has been correctly answered.
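To make the mechanism concrete, the following minimal sketch implements such a drill loop (in Python; the frame material, questions, and answers are hypothetical placeholders, not content from any particular CAI package):

```python
# A minimal sketch of the traditional CAI drill loop described above.
# The frame material, questions, and answers are hypothetical placeholders.

frames = [
    ("An expert system applies stored expert knowledge to a problem.",
     "Does an expert system apply stored knowledge? (y/n)", "y"),
    ("The knowledge base holds the rules; the inference engine applies them.",
     "Does the inference engine hold the rules? (y/n)", "n"),
]

def run_drill(frames):
    for material, question, answer in frames:
        while True:                       # re-display until answered correctly
            print(material)
            response = input(question + " ").strip().lower()
            if response == answer:
                break                     # correct: advance to the next frame

run_drill(frames)
```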

Traditional CAI applications encourage a passive approach to learning, characterized by an absence of student initiative [20]. The student's interaction with the tutorial program is limited to highly structured, multiple-choice formats, which in turn further reinforces a passive learning orientation. Other drawbacks of the traditional approach include: an inability to respond to unanticipated answers, potential misinterpretation of student errors leading to inappropriate remedial information, and an inability to match the difficulty of learning materials and questions to the student's capabilities and level of sophistication [20, 22].

  3.2. Enhancing CAI with AI, Hypertext, and Information Tracing Technologies

  3.2.1. Artificial Intelligence

The introduction of artificial intelligence technology to CAI has led to research and development in an area known as Intelligent Tutoring Systems (ITS). An ITS applies theories of learning to infer a knowledge structure that depicts the student's current understanding of the subject matter, and then uses that knowledge to adapt instruction to the student's particular needs [8].

Much as expert system shells distinguish domain knowledge from control knowledge, in ITS, knowledge of the domain (including conceptual, procedural, and heuristic knowledge), knowledge of the student, and knowledge of how to teach are differentiated [16, 21, 24]. By explicitly representing both the concepts to be taught and how a student might learn those concepts, ITS has the ability to conduct a more comprehensive diagnosis of the student's current state of knowledge.

ITS uses explicit computer-implemented models of instruction to determine how the program responds to the student [11]. The instruction model usually consists of a large set of fine-grained inference rules. Each rule consists of actions together with conditions under which these actions should occur. When an ITS is running, these rules are not invoked in any fixed order. Instead, at each point in time, the program searches for the rule most likely to contribute useful information to the current situation.
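The following sketch illustrates this kind of fine-grained rule set; the rule contents and the scoring heuristic used to pick the "most useful" rule are illustrative assumptions, not a published ITS design:

```python
# Sketch of an instruction model as a set of fine-grained condition/action
# rules. Rules are not invoked in a fixed order: on each cycle, every
# applicable rule is scored and the one judged most useful fires. The rule
# contents and the scoring heuristic are illustrative assumptions.

rules = [
    {"when":  lambda s: s["misses"] >= 3 and not s["hint_shown"],
     "do":    "offer_hint",
     "score": lambda s: s["misses"]},
    {"when":  lambda s: s["idle_seconds"] > 120,
     "do":    "prompt_student",
     "score": lambda s: s["idle_seconds"] / 60},
]

def next_action(state):
    applicable = [r for r in rules if r["when"](state)]
    if not applicable:
        return None                       # nothing useful to contribute yet
    return max(applicable, key=lambda r: r["score"](state))["do"]

print(next_action({"misses": 4, "hint_shown": False, "idle_seconds": 30}))
# -> offer_hint
```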

An ITS engages the student in a reasoning task, and compares the student's actions with the set of actions that can be generated by the system's performance model (knowledge of the domain). This comparison lets the ITS identify when the student does something either wrong or useless in pursuit of the current task (knowledge of the student). The system then uses a model of teaching strategies to determine what (if any) advice is to be offered [15].
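A minimal sketch of this comparison step, assuming a hypothetical action vocabulary and a table-driven teaching model (real ITS performance models are far richer):

```python
# Sketch of the comparison step: the performance model generates the set of
# acceptable next actions; an action outside that set is referred to the
# teaching model for advice. Action names and advice text are hypothetical.

def expert_actions(task_state):
    # In a real ITS this would be generated by the domain model.
    return {"identify_variable", "consult_definition"}

teaching_model = {
    "guess_rule": "Identify the decision variables before writing rules.",
}

def critique(student_action, task_state):
    if student_action in expert_actions(task_state):
        return None                       # acceptable step: no intervention
    return teaching_model.get(student_action,
                              "That step does not advance the current task.")

print(critique("guess_rule", {}))
```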

  3.2.2. Hypertext

While traditional CAI guides a student through a fixed tutorial path, the use of hypertext allows the student to chart his or her own navigational paths through the learning material relatively independent of the main tutorial path. Advanced students could quickly move through the high-level material, whereas less skilled students could descend through the hypertext structure to access the support information if necessary. The inclusion of hypertext responds directly to criticisms that traditional CAI encourages passive learning. By accommodating a variety of reasoning and problem-solving styles, the resulting system can substantially improve the fit between the student's and the system's level of sophistication and knowledge [26].

  3.2.3. Information Tracing

The basic objective of information tracing is to capture a list of the specific information used by a student to complete a learning task or make a decision. Within the context of CAI, the computer can collect other parameters pertaining to the student's learning behavior. For example, the amount of time a student spends on a learning task might be indicative of the relative difficulty of that task. The student's navigational path through the learning material might also contain important data: does the student engage in a systematic search following the natural sequence (i.e., chapter 1, then chapter 2, etc.) of the material and, thereby, go through a lot of background information not directly relevant to the learning objective? Or does the student use a directed search strategy and jump to specific chunks of information in an attempt to quickly reach a conclusion [5, 6]?
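As one illustration, the following sketch classifies a recorded navigational path as systematic or directed; the section numbering scheme and the 0.8 cutoff are illustrative assumptions:

```python
# Sketch of classifying a recorded navigational path as a systematic,
# in-sequence search versus a directed search that jumps between chunks.
# Sections are numbered in their natural order; the 0.8 cutoff is an
# illustrative assumption.

def classify_path(visited):
    """visited: ordered list of section numbers the student opened."""
    if len(visited) < 2:
        return "insufficient data"
    in_sequence = sum(1 for a, b in zip(visited, visited[1:]) if b == a + 1)
    ratio = in_sequence / (len(visited) - 1)
    return "systematic" if ratio >= 0.8 else "directed"

print(classify_path([1, 2, 3, 4, 5]))     # systematic
print(classify_path([1, 7, 3, 9]))        # directed
```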

Figure 1. A sample screen on identifying important decision variables

Past implementations of information tracing have been time- and labor-intensive, rendering the technique impractical for widespread use. Newer technologies now offer an alternative to traditional information tracing methods: with AI techniques, programs can be written to collect and analyze information tracing data [13]. Information tracing adds the monitoring component to ITS, and coupling AI to the information trace adds the self-monitoring component.

Both the instructor and the student can benefit from such capabilities. From the instructor's perspective, information trace data could be analyzed to facilitate organization of the tutorial material and planning of better instructional strategies. From the student's perspective, information tracing provides the opportunity for individualized assessment and diagnosis. The trace data could be examined to determine which aspects of the tutorial were most challenging, and an individualized training program could subsequently be prescribed for that student.

  4. Developing and Testing a SMITS

A project is currently underway to develop and incrementally test a SMITS. In the absence of an integrated tool that supports both a graphical user interface with hypertext capabilities and an expert system engine, we are developing the SMITS prototype using Asymetrix Multimedia Toolbook CBT Edition. This tool provides superior interface development and student records management capabilities, but lacks a built-in knowledge representation and reasoning environment. However, Toolbook's powerful OpenScript language does allow hand coding of a reusable inference engine to perform the reasoning task.
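The prototype's engine itself is written in OpenScript; purely for illustration, the following Python sketch shows the kind of small, reusable forward-chaining engine involved (the sample rules are invented, not the tutorial's actual knowledge base):

```python
# The prototype's reusable engine is hand coded in OpenScript; purely for
# illustration, this Python sketch shows a comparably small forward-chaining
# engine. The sample rules are invented, not the tutorial's knowledge base.

def forward_chain(rules, facts):
    """rules: list of (set_of_antecedents, consequent); facts: known facts."""
    facts = set(facts)
    changed = True
    while changed:                        # keep firing until nothing new
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

rules = [({"quiz_score_low", "time_on_task_low"}, "recommend_review"),
         ({"recommend_review"}, "show_part_one_links")]
print(forward_chain(rules, {"quiz_score_low", "time_on_task_low"}))
```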

  4.1. Tutorial Domain

The SMITS prototype was designed to teach the concepts and development of expert systems (ES). The tutorial materials were divided into two parts. Part I provides narrative information with a hypertext structure. It defines the ES technology, explains its rationale, describes its development method, and tests students on their understanding of the concept. After the student completes this part, an expert system provides the student with a customized report based on the answers that the student gave to the pop-up quizzes that appeared throughout the tutorial materials.

Figure 2. A sample screen on using a decision table to help develop problem-solving rules

Part II is an interactive exercise. It contains two decision-making problems and guides the student in developing knowledge base rules to solve them. The first problem is used in a self-paced tour, in which the system walks the student through the process of identifying the decision variables involved in the problem (see Figure 1 for a sample screen) and of developing heuristic problem-solving rules through the use of a decision table (see Figure 2). The student then works on the second problem independently to complete the same tasks of variable identification and rule development. Part II is further linked to Part I through hypertext, so the student has ready access to the basic concepts and definitions when needed. When in use, the system keeps a log file for each student that records his or her mouse clicks on individual screen objects, and the elapsed time between mouse clicks, for subsequent analysis of usage behavior.
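A minimal sketch of such a log file writer (the file layout and object names are assumptions, not the prototype's actual format):

```python
# Sketch of the per-student log: each mouse click on a screen object is
# appended together with the elapsed time since the previous click. The
# file layout and object names are assumptions, not the prototype's format.

import time

class ClickLogger:
    def __init__(self, path):
        self.path = path
        self.last = None                  # time of the previous click

    def log(self, object_name):
        now = time.monotonic()
        elapsed = 0.0 if self.last is None else now - self.last
        self.last = now
        with open(self.path, "a") as f:
            f.write(f"{object_name}\t{elapsed:.1f}\n")

logger = ClickLogger("student42.log")
logger.log("btn_next")                    # first click: elapsed 0.0
logger.log("hypertext_link")              # elapsed time since btn_next
```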

  4.2. Design and Implementation Issues

To collect usage data from the field to aid in the design of the system's pedagogical strategies, we have taken an evolutionary approach to implementing and testing the SMITS prototype. During the last two academic terms, we have asked business students enrolled in an introductory information systems course to use the tutoring system. Students were given a term project of developing a small, but real, expert system, and were instructed to first work with the tutoring system to obtain knowledge and skills needed for completing the development project. A preliminary analysis of the students' usage behavior has raised a number of design and implementation issues.

Figure 3. A sample screen of the variable identification task from the interactive exercise
(when the user clicks on problem text corresponding to a correct variable, the text changes to a different color and a bold typeface)

First, the manner and degree to which students used the tutoring system seemed largely a function of the incentives we provided. To encourage its use, we initially allowed students maximum freedom in deciding how much effort to put into the learning process, without judging them against predetermined behavioral criteria. As a result, when completing the interactive exercise, students did not always try to complete every step, nor did they attempt to check the accuracy or validity of their answers, although the system had the facility to provide such verification. We responded by reprogramming the user-system dialogue so that the student must complete each step of the exercise before proceeding to the next. However, because students used the system in an open lab rather than in an individually supervised and controlled setting, enforcing a specific usage pattern would not eliminate other undesirable behaviors, such as copying. Alternative incentive mechanisms are needed to resolve this issue. One solution might be to administer the first-time use of the system in a supervised lab environment, where students must complete the tutorial material independently.

Second, we will need to decide what kinds and levels of help facilities the tutoring system should provide. Whether answering a quiz question or completing an interactive exercise, some students will inevitably encounter problems and find it difficult to continue. The system must be able to intervene and offer help. At a minimum, we found we must make the following design decisions: when should the system provide help, and how easily should help be accessible? For example, during the variable identification stage of our interactive exercise (see Figure 3), the system could offer to reveal the answers once a student has had a certain number of "misses." What threshold number to use, however, remains an unanswered question. During our pilot tests, we set the threshold at ten consecutive misses before showing a "Reveal" button. Surprisingly, upon being asked to confirm their desire to have the system reveal the answer, most students chose to withdraw their requests and continued to find all the answers on their own.
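The miss-threshold logic can be sketched as follows (the state representation and return values are illustrative; only the threshold of ten comes from our pilot tests):

```python
# Sketch of the miss-counting logic: after a threshold of consecutive misses
# (ten in our pilot tests) a "Reveal" button is offered; the student must
# still confirm before the answer is shown. State and return values are
# illustrative.

THRESHOLD = 10                            # consecutive misses before "Reveal"

def on_attempt(is_correct, state):
    if is_correct:
        state["misses"] = 0
        return "advance"
    state["misses"] += 1
    if state["misses"] >= THRESHOLD:
        return "show_reveal_button"       # student may still decline it
    return "try_again"

state = {"misses": 9}
print(on_attempt(False, state))           # -> show_reveal_button
```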

Developing decision table entries provides another example. Because every row of a decision table corresponds to a knowledge base rule, as soon as the student completes a row the system can determine whether the underlying rule is valid, contains a redundant condition, or coincides with another row of the same decision table. Figure 4 shows a sample screen in which an incorrect entry has been made. As soon as this happens, the system presents a "Hint" box accompanied by a beep. Here again we need to decide how easily help should be made available. It is an issue of "granularity" [12]. If the system's advice is too general and crude, it may leave the student with no clear direction on how to proceed. On the other hand, if advice is offered at the first sign of an error, we run the risk of making practically all of the decisions for the student, and when that happens, learning is less likely to take place.

Figure 4. A sample screen of rule development
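The row-level checks can be sketched as follows (the condition encoding and sample entries are illustrative, not the tutorial's actual problem):

```python
# Sketch of the per-row checks: each completed decision-table row is a rule,
# represented here as (tuple_of_condition_entries, action). The condition
# encoding and sample entries are illustrative, not the tutorial's problem.

def check_row(new_row, existing_rows):
    conds, action = new_row
    for prev_conds, prev_action in existing_rows:
        if prev_conds == conds:
            return ("coincides with an earlier row" if prev_action == action
                    else "contradicts an earlier row")
        # Same action but the rows differ in exactly one condition: that
        # condition does not affect the outcome, so it appears redundant.
        if prev_action == action:
            diffs = [i for i, (a, b) in enumerate(zip(prev_conds, conds))
                     if a != b]
            if len(diffs) == 1:
                return f"condition {diffs[0] + 1} appears redundant"
    return "valid"

table = [(("Y", "N"), "approve")]
print(check_row((("Y", "Y"), "approve"), table))  # condition 2 appears redundant
```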

Third, judging from data collected on how much time students spent on each part of the tutorial material, it appeared that they did not always spend sufficient time on the material we considered most important in the subject area. If one characteristic of an effective teacher is the ability to help students focus on the more important matters, an intelligent tutoring system ought to play a similarly active role in directing or redirecting students' attention toward such matters. The implication is that the system should perhaps monitor elapsed time more closely and provide context-sensitive guidance.
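One possible form of such monitoring, sketched with invented section names and time weights:

```python
# One possible form of closer time monitoring: compare the time spent on
# each section against instructor-set minimums for the sections judged most
# important, and nudge the student toward what was skimmed. Section names
# and minute values are invented for illustration.

expected_minutes = {"knowledge_acquisition": 10, "decision_tables": 8}

def guidance(time_spent):
    """time_spent: dict mapping section name -> minutes actually spent."""
    skimmed = [s for s, m in expected_minutes.items()
               if time_spent.get(s, 0) < m]
    if skimmed:
        return "Consider revisiting: " + ", ".join(skimmed)
    return None                           # attention allocation looks fine

print(guidance({"knowledge_acquisition": 3, "decision_tables": 12}))
```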

Fourth, given the amount of trace data we can potentially collect and the amount of processing required to analyze it, we need to determine what kinds of information are worth tracking. For example, a student may, in the course of using the system, click on many different objects with the mouse. All of these clicks can be trapped and logged, but some will be more informative than others in revealing the student's learning process. It appears that we cannot, and should not, try to answer this question a priori. Rather, we should initially collect as much data as possible and then determine the cost-effectiveness of the different kinds of data.

  5. Current Status

We are currently conducting a field experiment to evaluate the SMITS prototype. Two groups of students are participating. One group (the treatment group) was instructed to use the system before completing the ES term project. These students were told they would receive credit for using the system, and that their understanding of the tutorial material presented by the system would be further tested on the final examination. The second group (the control group), which completed a similar project, did not use the program; instead, they received instruction only in class and from handouts covering the same content material as the system provides. At the end of the semester, we will evaluate and compare the projects submitted by the two groups, as well as their performance on the final examination.

Acknowledgments

The research reported here is funded by a grant from the National Center for Automated Information Research.

References

  1. Accounting Education Change Commission (1990) Objectives of education for accountants: position statement number one, Issues in Accounting Education, 5(2), 307-312.
  2. American Institute of Certified Public Accountants (1996) AICPA Information Technology Section announces top technologies for 1996, AICPA InfoTech Update, Winter, 1-3.
  3. Arthur Andersen and Co., Arthur Young, Coopers and Lybrand, Deloitte Haskins and Sells, Ernst and Whinney, Peat Marwick Main and Co., Price Waterhouse, and Touche Ross (1989) Perspectives on Education: Capabilities for Success in the Accounting Profession.
  4. Beerel, A. (1993) Expert Systems in Business: Real World Applications, New York: Ellis Horwood.
  5. Biggs, S.G. and Mock, T.J. (1983) An investigation of auditor decision processes in evaluation of internal controls and audit scope decisions, Journal of Accounting Research, 234-255.
  6. Bouwman, M.J. (1982) Expert versus novice decision-making in accounting: a process analysis, in Ungson, G.R. and Braunstein, D.N. (eds.), Decision Making: An Interdisciplinary Inquiry, Kent Publishing Company.
  7. Bouwman, Marinus and Carol Knox-Quinn (1994) Student knowledge engineering in accounting: a case study in learning. In Proceedings of the Annual Meeting of the Western American Accounting Association, Portland, OR.
  8. Brokken, F.B. and Been, P.H. (1993) Student Modeling in Intelligent Tutoring Systems: Acquisition of Cognitive Skill and Tutorial Interventions, Social science computer review, 11, 3, 329-353.
  9. Brown, C. E., Baldwin-Morgan, A. A., and Sangster, A. (1995) Expert systems in accounting education--a literature guide. Accounting Education.
  10. Brown, C. E. and Phillips, M. E. (1995) Suitability of expert systems technology for management accounting tasks, forthcoming in The International Journal of Intelligent Systems in Accounting, Finance and Management.
  11. Capell, P. and Dannenberg, R.B. (1993). Instructional design and intelligent tutoring: Theory and the precision of design, Journal of Artificial Intelligence and Education, 4, 1, 95-121.
  12. Carroll, J.M. and McKendree, J. (1987). Interface design issues for advice-giving expert systems, Communications of the ACM, 30,1, 14-31.
  13. Collins, A.M. and Brown, J.S. (1988). The computer as a tool for learning through reflection, in Mandl, H. and Lesgold, A., (ed.), Learning Issues for Intelligent Tutoring Systems. New York, NY: Springer-Verlag.
  14. Durkin, John (1993) Expert Systems: Catalog of Applications. Akron, OH: University of Akron Printing.
  15. Elsom-Cook, M. (1993). Student modeling in intelligent tutoring systems, Artificial Intelligence Review, 7, 3/4, 227-235.
  16. Frasson, C. & Gauthier, G. (eds.) (1990) Intelligent Tutoring Systems: At the Crossroads of Artificial Intelligence and Education, Ablex Publishing Corp.
  17. Graham, L.E., Damens, J. and Van Ness, G. (1991) Developing risk advisor: an expert system for risk identification. Auditing: A Journal of Practice and Theory 10(1), 69-96.
  18. Hayes-Roth, Frederick and Jacobstein, Neil (1994) The state of knowledge based systems. Communications of the ACM, 37(3), 27-39.
  19. Heller, R.S. (1990) The role of hypermedia in education: A look at the research issues, Journal of Research on Computing in Education, 22, 4, 431-441.
  20. Jonassen, D. H. (1988) Instructional Designs for Microcomputer Courseware, Hillsdale, NJ: Lawrence Erlbaum Associates.
  21. Kaplan, R. and Rock, D. (1995) New directions for intelligent tutoring, AI Expert, 10, 2, 31-40.
  22. Larkin, J.H. and Chabay, R.W. (1992) (ed.) Computer-Assisted Instruction and Intelligent Tutoring Systems, Hillsdale, NJ: Lawrence Erlbaum Associates.
  23. O'Leary, D. E. and Watkins, P. R. (1992) Expert Systems in Finance. Amsterdam: North Holland.
  24. Regian, J.W. and Shute, V.J. (1992) Cognitive Approaches to Automated Instruction, Hillsdale, NJ: Lawrence Erlbaum Associates.
  25. Seidel, R.J. and Park, O. (1994) An historical perspective and a model for evaluation of intelligent tutoring systems, Journal of educational computing research, 10, 2, 103-128.
  26. Shirk, H.N. (1992) Cognitive architecture in hypermedia instruction, in Barrett, E. (ed.) Sociomedia, Cambridge, MA: MIT Press, 79-93.
  27. Srivastava, A. and Vaishnavi, V. (1993) A framework for development of generalized intelligent systems for tutoring (GIST): An object-oriented hypermedia approach, The Journal of computer information systems, 34, 2, 62-66.
  28. Teer, F. and Teer, H. (1994) The trend in expert systems coverage in business schools and implications for people in industry. International Journal of Applied Expert Systems, 2(3).
  29. Sangster, A. & Wilson, R. A. (1991) Knowledge-based learning within the accounting curriculum. British Accounting Review, (23) 243-261.
  30. Vasarhelyi, M.A. (1995a) Artificial Intelligence in Accounting and Auditing, Volume 2: Using Expert Systems. Princeton, NJ: Markus Wiener Publishing.
  31. Vasarhelyi, M.A. (1995b) Artificial Intelligence in Accounting and Auditing, Volume 3: Knowledge Representation, Accounting Applications and the Future. Princeton, NJ: Markus Wiener Publishing.
  32. Watkins, P. R. and Eliot, L. B. (1993) Expert Systems in Business and Finance: Issues and Applications, Chichester, UK: John Wiley & Sons.
  33. White, C.E. (1994) An analysis of the need for ES and AI in accounting education as viewed by educators and practitioners. Third Annual Research Workshop on AI/ES in Accounting, Auditing and Tax, American Accounting Association.
  34. Zahedi, F. (1993) Intelligent Systems for Business: Expert Systems with Neural Networks, Belmont, CA: Wadsworth.

L. Richard Ye & Glen L. Gray (c) 1996. The authors assign to ASCILITE and educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to ASCILITE to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the ASCILITE 96 conference papers, and for the documents to be published on mirrors on the World Wide Web. Any other usage is prohibited without the express permission of the authors.