This document refines the evaluation methods for the competition that were introduced in the technical annex of the call. Specifically, it refers to the procedures and parameters of the tests and to the criteria "Accuracy", "User Acceptance" and "Integrability".

1. Organization and Test Procedures

A few notes about the organization of the competition:
- The organizers will give an appointment to each competitor, so that each competitor may stay at the living lab for the time necessary for the evaluation of his/her system.
- Only one competitor at a time is admitted to the living lab (no other competitor will be present during another competitor's evaluation). The reason is that we will use the same paths for the tracking tests of all competitors, and such paths cannot be disclosed before the actual test because this might invalidate the results.
- The path to be tracked will not be disclosed to anybody before entering the living lab for the competition.
- The areas of interest are described here.
- We will give each competitor the opportunity to repeat the test once and to keep the better of the two results.
- The organizers will not provide any tools for bricolage, hence competitors should bring all the tools they need. If necessary, there are several malls near the living lab where competitors may buy tools. In any case, if competitors need to hang devices on the walls and plan to use glue, nails or any other method that may damage the living lab, they are requested to contact the organizers as soon as possible.
Installation of localization systems at the living lab:
- Each competitor will be given up to 3 hours to install the system, execute the tests and uninstall everything. Specifically, we plan to give at most 1 hour for the installation. Any competitor who feels that he/she cannot meet this requirement is invited to communicate this to the organizers as soon as possible.
- After the installation the organizers will make a preliminary test to ensure that the connection between the competing system and the evaluation tools is correctly established. The time required to solve any issue related to this does NOT count towards the Installation Complexity criterion.
- The porch will be part of the localization area of the living lab; this means that the pre-defined paths followed by the actor may also cross the porch. The porch is covered by a ceiling, and it is possible to install sensors there.
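Before the official preliminary connection test mentioned above, a competitor may want to verify on their own that the evaluation machine is reachable. A minimal sketch, assuming a hypothetical host address and port (the actual endpoint is provided by the organizers):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Minimal connectivity check: try to open a TCP connection to the
 * machine running the evaluation tools. Host, port and timeout used
 * in main() are assumptions for illustration only.
 */
public class ConnectionCheck {

    /** Returns true if a TCP connection can be opened within timeoutMs. */
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical address of the benchmarking machine.
        boolean ok = canConnect("192.168.1.10", 9090, 2000);
        System.out.println(ok ? "link OK" : "link FAILED");
    }
}
```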
In all the tests, the competing systems will localize the movements of an actor (a person from the organization trained in moving along pre-defined paths).

2. Integration with the domotic bus

Competitors will be able to receive events generated by the domotic bus of the Living Lab, specifically events related to:
- light switches of the lab (ON/OFF)
- a stationary bicycle (in use/not in use)
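The two event types above could be modelled on the receiving side as, for instance, the following class. The type names, the boolean encoding of the two states, and the coordinate fields are assumptions; the actual encoding is defined by the integration package:

```java
/**
 * Sketch of a domotic-bus event as a competitor might represent it.
 * Each event originates at an "interaction point" with fixed,
 * a-priori known coordinates.
 */
public class DomoticEvent {

    public enum Source { LIGHT_SWITCH, STATIONARY_BICYCLE }

    public final Source source;
    public final boolean active;  // light ON / bicycle in use
    public final double x, y;     // fixed coordinates of the interaction point

    public DomoticEvent(Source source, boolean active, double x, double y) {
        this.source = source;
        this.active = active;
        this.x = x;
        this.y = y;
    }

    @Override
    public String toString() {
        return source + (active ? " ON at (" : " OFF at (") + x + "," + y + ")";
    }
}
```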
The coordinates of these "interaction points" are fixed and known a priori. Here you can find their official coordinates. Competitors can exploit this live feedback to improve their localization algorithms (e.g. re-calibrating the algorithm when an event is received). The details of how to receive this data are explained in the next point.

3. Integration with the benchmarking system

Competitors are kindly requested to perform a simple SW integration with our benchmarking system. For this purpose, we have created an "integration package" with a developer's handbook. The package allows sending localization samples and receiving events from the domotic bus. Roughly, each localization sample is identified by:
- timestamp of the event (as estimated by the localization system; it can differ from the current time)
- coordinates of the localized subject
- Areas of Interest the subject is located in (there can be more than one)
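A localization sample carrying the three fields listed above could look like the following. The class and field names, and the one-line textual form, are assumptions for illustration; the authoritative definition is in the integration package's developer's handbook:

```java
import java.util.List;

/**
 * Sketch of a localization sample: a timestamp estimated by the
 * localization system, the subject's coordinates, and the (possibly
 * multiple) Areas of Interest the subject is located in.
 */
public class LocalizationSample {

    public final long timestampMs;             // as estimated by the system
    public final double x, y;                  // coordinates of the subject
    public final List<String> areasOfInterest; // may contain more than one AoI

    public LocalizationSample(long timestampMs, double x, double y,
                              List<String> areasOfInterest) {
        this.timestampMs = timestampMs;
        this.x = x;
        this.y = y;
        this.areasOfInterest = areasOfInterest;
    }

    /** One-line textual form, e.g. for logging or a line-based protocol. */
    public String asLine() {
        return timestampMs + ";" + x + ";" + y + ";"
                + String.join(",", areasOfInterest);
    }
}
```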
Our system is based on the universAAL middleware and, although it is possible to delve into the details of the platform, we have abstracted away the middleware and created a simplified interface for the competition. The package allows two options:
- implementing a Java interface which will be used by the middleware to send samples
- sending data to a local socket while an instance of the middleware is running and listening to it. This option allows systems that are not based on Java to participate as well.
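The second option above can be sketched as a small sender that writes one sample per line to the socket the middleware instance is listening on. The port and the line format here are assumptions; the actual protocol is specified in the integration package manual:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

/**
 * Sketch of the socket-based option: write already-formatted sample
 * lines to a local socket on which the middleware is listening.
 */
public class SocketSender implements AutoCloseable {

    private final Socket socket;
    private final PrintWriter out;

    public SocketSender(String host, int port) throws IOException {
        this.socket = new Socket(host, port);
        // auto-flush so every println reaches the middleware immediately
        this.out = new PrintWriter(socket.getOutputStream(), true);
    }

    /** Sends one already-formatted localization sample line. */
    public void send(String sampleLine) {
        out.println(sampleLine);
    }

    @Override
    public void close() throws IOException {
        out.close();
        socket.close();
    }
}
```

A system written in any other language would simply open the same socket and write the same lines, which is exactly why this option exists.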
All the instructions and details about the integration package are contained in this manual.

4. Time Synchronization

Since the evaluation tool will rely on the competitor's machine for timestamping the localization packets, the two computers (the one with the REAL positions/timestamps and the competitor's one) must be synchronized. Competitors MUST come with an NTP client installed on their computer and MUST run it before starting the benchmark. The organizers will verify that the clocks of the two computers are actually synchronized.

5. Refinement of Criteria

Accuracy: Concerning the accuracy score evaluated during phase 1 (the AoI benchmark), we give half a point in cases of confusion between a big and a small area. For instance, in the cases:
- system indicates AOI4, but it is really AOI41
- system indicates AOI41, but it is really AOI4
both get 0.5 points.

User Acceptance: The following questions will be used by the evaluation committee to evaluate each localization system. Answers to the questions are in the form of yes/no/NA. The reliability of the questionnaire has been pre-evaluated against last year's localization systems.

Tags
- Does your system need the user to carry something with him/her all the time?
- Is this "thing" easily wearable?
- Will it be in the future?
- Can it be brought outside without inconvenience?
Environment
- Is your installation well hidden in the house?
- Will it be in the future?
- Does your installation need considerably more cabling than a typical Home PC installation (including for example printer, scanner, phone connector etc.)?
- Can your installation be a piece of furniture?
Presence
- Does your system use video cameras?
- Is your system always visible from every localizable point?
Maintenance
- Does the user need to replace batteries?
- Does the user need to replace batteries in less than three months?
- Does the user need to replace batteries in fixed appliances?
- Does the user need to re-calibrate the system?
- Does the user need to re-calibrate the system in less than three months?
Integrability: Integrability will be evaluated with a questionnaire. Answers to the questions are in the form of yes/no/NA. The reliability of the questionnaire has been pre-evaluated against last year's localization systems.

Code integration
- Do you provide any API for integrating your system into others?
- Will you provide it in the future?
- Do you provide any written documentation for this API?
- Will you provide it in the future?
- Any code documentation (like javadoc)?
- Any tutorial?
- Any sample application?
- Do you publish your code as open source?
- Do you use any well-known application-level protocol which would allow an external system to plug into yours without requiring any further specification of the protocol? (e.g. SOAP, XML-RPC, DPWS, BUT NOT TCP/IP or Ethernet)
Testing and configuration
- Do you provide any tool for testing/monitoring the system?
- Will you provide it in the future?
- Is the tool graphical and easy to use?
- Do you provide any tool for configuring/calibrating your system?
- Will you provide it in the future?
- Is the tool graphical and easy to use?
Portability
- On which operating systems does your system run?
- Windows?
- Unix/linux?
- MacOS?
- Others?
- Do you know of any incompatibility of your SW with others?
- Can part of your code be substituted by another existing one (that you know of)? [e.g. other existing libraries, modules that can be easily plugged to substitute yours]
HW
- Can any part of your system be substituted by another commercially available one WITHOUT ANY modification to the HW and/or the firmware?
- Is your firmware publicly available?
- Do you publish your firmware as open source?