The two plots show the predicted effect of the current deployment plan on the performance of the two main tasks that each base station has to execute: searching (for tags that have not been detected lately) and tracking (of tags that have been detected). The upper plot shows the expected number of deployed tags. The lower plot shows a scaled measure of the expected number of localizations, taking into account the number of tags and their sampling rates. In the lower plot, the tags are separated into day (orange) and night (blue), as we hope to implement a way to reduce the sampling frequency during the night or day to lessen the impact on the tracking performance of the system (currently assuming a sampling rate of 0.125 Hz). In both plots, the status is divided into three conditions, "good", "monitor", and "critical", which mean, respectively, that we do not expect problems, that the system performance should be checked regularly, and that problems are to be expected. When drawing practical conclusions from these two plots for our planned work, we need to consider two potential bottlenecks: tracking performance and searching performance. Ideally, we would want the system to allocate its computational time only to tracking, but then tags that are not currently detected would never be found again. Conversely, if effort were devoted mostly to searching, it would come at the expense of the system's ability to track the tags it has already found.
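As a rough sketch of what the lower plot's scaled measure could look like, the following computes expected localizations per second from the number of day and night tags and their sampling rates, at the currently assumed 0.125 Hz. The function name and signature are illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch of the "scaled localizations" measure: expected
# localizations per second, split into day (orange) and night (blue).
DEFAULT_RATE_HZ = 0.125  # currently assumed rate: one localization every 8 s

def expected_localizations_per_s(day_tags, night_tags,
                                 day_rate=DEFAULT_RATE_HZ,
                                 night_rate=DEFAULT_RATE_HZ):
    """Return (day, night) expected localizations per second.

    Lowering night_rate (or day_rate) models the planned reduction of
    the sampling frequency during part of the diel cycle.
    """
    return day_tags * day_rate, night_tags * night_rate
```

For example, 80 day tags and 40 night tags at 0.125 Hz yield 10 and 5 expected localizations per second, respectively; halving the night rate would halve only the night figure.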
Assuming we have found all tags - very unlikely - the system can, in theory, support up to 125 tags sampling at 1 Hz simultaneously. In practice, however, the system is set to spend 50% of its time tracking and 50% searching, and this split is necessary because of shifts in tag timing and interference. That means tracking will operate smoothly for up to 60 tags sampling at 1 Hz, or any equivalent load (for example, 480 tags at 0.125 Hz).
Predicting searching performance is much harder, as many extraneous factors play a role. We know from experience that up to 60 tags can be supported without problems, up to 90 will require monitoring, and beyond that we expect real issues. This is roughly in line with the capabilities of a system that spends half of its time searching and the other half tracking.
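The three-level status used throughout the plots and table can be expressed as a simple threshold rule; the 60- and 90-tag boundaries are taken from the text, while the function name and the choice of inclusive boundaries are assumptions.

```python
# Sketch of the "good" / "monitor" / "critical" classification.
GOOD_LIMIT = 60      # up to 60 tags: no problems expected
MONITOR_LIMIT = 90   # 61-90 tags: performance should be checked regularly

def search_status(n_tags):
    """Map a tag count to the three-level status used in the plots."""
    if n_tags <= GOOD_LIMIT:
        return "good"
    if n_tags <= MONITOR_LIMIT:
        return "monitor"
    return "critical"
```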
The table provides a detailed list of all tag deployments, together with a summary for each day. If either of the two thresholds is exceeded at any given time, the affected rows are highlighted in red.
The slider adjusts the date range for both plots and the table, allowing you to narrow in on a specific period.
These are first example maps, one based on a small subset of the barn owl data collected in 2017 and the other on tracks of Egyptian fruit bats. Each map gives the percentage of achieved localizations within a 250 × 250 m grid, estimated after linear interpolation of minor gaps in the original data. This gives you a first indication of the approximate quality of the data you are going to collect in certain areas of the Hula valley. If an area is not colored, no animals ever ventured there. Keep in mind that an animal's behavior (such as flight height) will have a strong effect on the quality of your data.
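One way the per-cell percentages could be derived is sketched below: bin localization attempts into 250 × 250 m cells and compute the achieved fraction per cell. All names here are hypothetical, and the real pipeline additionally interpolates minor gaps in the tracks before binning.

```python
# Illustrative sketch: fraction of achieved localizations per grid cell.
CELL_M = 250.0  # grid resolution in metres

def grid_success_rate(fixes):
    """fixes: iterable of (x_m, y_m, achieved) localization attempts,
    where achieved is True if the localization succeeded.

    Returns {(col, row): fraction_achieved} per 250 x 250 m cell.
    """
    attempts, achieved = {}, {}
    for x, y, ok in fixes:
        cell = (int(x // CELL_M), int(y // CELL_M))
        attempts[cell] = attempts.get(cell, 0) + 1
        achieved[cell] = achieved.get(cell, 0) + (1 if ok else 0)
    return {c: achieved[c] / attempts[c] for c in attempts}
```

Cells with no attempts simply do not appear in the result, matching the uncolored areas on the maps.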
The login functionality is for database requests and admin purposes.