Right after the exam, we started receiving email inquiries about exam marking, course performance, and special consideration. After replying to a few, I have decided to answer the remaining inquiries here (and will not reply to them individually) so that we can focus on marking the exam and reconsidering some of your projects:
Your project result has been sent to your UNSW email.
Please note that email inquiries about marks (e.g., why you received a low mark even though you passed the sanity test) are likely to receive slow or no responses during this busy time of the term.
The same marking tests, developed from the provided specifications, are applied to all submissions. If you ask us to apply different marking criteria to your particular situation, we will have to decline, as that would be unfair to other students.
As mentioned in the last lecture, your project will be manually inspected, and results may be scaled up at our discretion if you perform well in the exam but not in the project.
*You can check the actual marking tests by running ~cs6714/reuters/marking on a CSE Linux machine, as previously posted.
FYI, the cohort performance of the project is as follows:
1st Qu.:  6.140
Median :  7.930
Mean   :  7.773
3rd Qu.:  9.485
Max.   : 11.000
17% of submissions achieved 10-11 out of 10.
As mentioned in the last live lecture, we are finalizing the project marks (almost there) and running the tests several times for cross-validation. Your marking result will then be emailed to you within 2-3 days. Once that is done, I will post another Notice here.
Meanwhile, if you like, you can check and run the marking test cases yourself:
1) Log in to a CSE Linux machine and change to the directory containing your index.py and search.py.
2) Run ~cs6714/reuters/marking
The results this script returns should, in principle, match the marking result you receive by email; a minimal sketch of invoking it from your submission directory is given below.
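For those who prefer to script the check (for example, to re-run it a few times and compare outputs), here is a minimal Python sketch of how you might invoke the marking script from your submission directory and capture its output. It is not part of the official marking procedure; it assumes you are on a CSE Linux machine and that the current directory contains your index.py and search.py, and it simply wraps the command from step 2 above.

    #!/usr/bin/env python3
    # Minimal sketch: run the marking script from the current directory,
    # which is assumed to contain index.py and search.py.
    import os
    import subprocess

    # ~cs6714 expands to the home directory of the cs6714 class account.
    marking_script = os.path.expanduser("~cs6714/reuters/marking")

    # Run the script where it can find index.py and search.py, capturing its output.
    result = subprocess.run([marking_script], capture_output=True, text=True)

    print(result.stdout)
    if result.returncode != 0:
        print("marking script exited with status", result.returncode)
        print(result.stderr)

Running the script directly from the shell is, of course, just as good; the wrapper only helps if you want to save and compare the output across runs.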