In this study we present a formative assessment tool we developed for students' assignments. The tool enables lecturers to define assignments for a course and to assign each problem in each assignment a list of criteria and weights by which the students' work is evaluated. During assessment, the lecturers enter a score for each criterion together with a justification. Once all the scores for the current assignment have been entered, the tool automatically generates reports for both students and lecturers. Each student receives a report by email that includes a detailed description of his or her assessed work, the relative score, and the student's progress across the criteria along the course timeline; this information is presented in charts that the tool generates automatically from the entered scores. The lecturers receive a report that includes summary statistics (e.g., averages, standard deviations) and detailed data (e.g., histograms) for the current assignment. This information enables the lecturers to follow the class's achievements and to adjust the learning process accordingly. The tool was examined with two pilot groups of college students taking courses in (1) Object-Oriented Programming and (2) Plane Geometry. The results reveal that most of the students were satisfied with the assessment process and with the reports produced by the tool. The lecturers who used the tool were also satisfied with the reports and with their contribution to the learning process.
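The weighted, criterion-based scoring and the class summary statistics described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the criterion names, weights, and scores are invented assumptions:

```python
from statistics import mean, stdev

def weighted_score(scores, weights):
    """Combine per-criterion scores (0-100) into one weighted problem score."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same criteria")
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Hypothetical criteria and weights a lecturer might define for one problem.
weights = {"correctness": 0.5, "design": 0.3, "style": 0.2}

# Hypothetical per-criterion scores entered for three students.
class_scores = [
    {"correctness": 90, "design": 80, "style": 70},
    {"correctness": 60, "design": 75, "style": 85},
    {"correctness": 100, "design": 90, "style": 95},
]

# One weighted score per student, then the class-level summary statistics
# that would feed the lecturer's report (averages, standard deviations).
finals = [weighted_score(s, weights) for s in class_scores]
class_average = mean(finals)
class_stdev = stdev(finals)
```

A student report would draw on the individual entries of `finals` across assignments to chart progress per criterion, while the lecturer report aggregates `class_average` and `class_stdev` per assignment.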