I am pasting below the conversations started by Inske.
PanorAms labelling tool
Hi everyone,
Here's a short update on where I stand with my research project.
As I mentioned yesterday during the meeting, I have been working on an online tool to get a small subset of the panoramic images of Amsterdam labelled with the help of human annotators. These labelled images will help me evaluate the performance of the algorithms I will be developing to recognize and localize urban objects within a panoramic image. The labelling tool itself involves adjusting and creating bounding-box annotations around urban objects.
It would be great if you could find the time to help me out by annotating some images and testing out this tool. If you do, please annotate at least 6 images.
You can find an instruction video on how to use the tool here: https://youtu.be/u7Ghf3nNEns.
The tool itself you can find here: https://panorams-tool.herokuapp.com/.
Let me know what you think, any feedback is most welcome!
Best,
Inske
Hello @Inske I’m going to have a go and see what happens. Just one thing: some features may not be quite the same in the UK, but I will do my best to complete the task.
Now I’ve started using the app, I notice you’ve covered how familiar you are with Amsterdam.
Does everyone update the same image(s) or are they all different?
Will you be able to compare results from different municipalities?
How many images are there? I’ve updated 6 images
The ability to go back and update a previous image would be really useful. I’ve realised as I’ve progressed along the street that I’ve incorrectly captured something. Local knowledge is going to be really important - I’m making mistakes that aren’t apparent until you move through the image.
It’s difficult to know how ‘deep’ into the image you should capture, e.g. things in the far distance.
Very easy to use.
It would be useful if it would bring the edit you make on one image forward into the next image.
Hi Inske,
Have just had a quick look and hope to try it soon.
Looks good
Hi @Inske , did some as well. I noticed the same images came back several times in a row. Is this intentional?
@Bhupesh have you seen this?
@Adrian, @Inske
I will go through it.
@Bhupesh apologies I thought it might be of interest as well
@Adrian, we can draw some analogies from this for our future work.
@Bhupesh @Adrian @claus @sydsimpson Thank you all for helping out!
@Adrian Many thanks for your feedback, very helpful!
I will include further instructions on how to correctly label objects, and show some examples of correctly labelled objects. Also, no worries about the mistakes - there is going to be a further refinement phase and quality check to correct the labels.
In answer to your questions:
- They are all different. However, some images can be highly similar because the car/boat takes a photo about every 2 meters or so. (@claus This is also in answer to your question. Each user gets a sequence of 3 almost consecutive images, which will thus be very similar to each other)
- Can you elaborate on this? Which results are you referring to?
- There are 547,525 images in the full dataset. However, I am only collecting ground-truth bounding boxes via human annotators for a small subset. My aim is to collect a subset of at least 20,000 ground-truth labelled images.
@Inske,
Keen to feed back on this at the Bergen meeting!
After a little practice I thought the user interface worked really well.
Local knowledge is quite relevant, for example I had some difficulty distinguishing cycleways as these are less common in the UK and are perhaps more prominently marked.
Also some classifications need defining e.g. ‘Park’. I take this to mean a publicly accessible open green space, but how ‘formal’ should it be?
Whilst we are in Bergen I wonder if there might be a more general working table on AI and machine learning related to image recognition?
There are several applications that Bradford Council is interested in.
At the moment we are relying on the expertise of @Bhupesh and Bradford University but a more transnational collaboration would be welcome.
@Inske
Will you be able to compare results from different municipalities?
- Can you elaborate on this? Which results are you referring to?
It would just be interesting to know how well each municipality did, e.g. whether Amsterdam did better than Bradford because of their local knowledge, or whether they are just better.
@Adrian
I will be able to compare results between people who know Amsterdam well and those who do not know Amsterdam that well. I’ll share them here once I have them.
@sydsimpson
Nice suggestion about having a more general working table on AI. I look forward to exchanging ideas and expertise in Bergen
Hi everyone,
Based on your input, I have made quite a few changes to my annotation tool.
The most significant is a built-in walk-through tutorial with examples of each type of object and a qualification test. The new version of the tool is now available here: https://vps.inskegroenen.nl/labeltool.
I’ll share some more information on the results once I have analysed them all.
Best,
Inske
Ah, got it now, just have to modify the URL
Yes, to add to my previous post: you need to add the worker_id to the URL, as mentioned in the initial pop-up you get when you go to the link I shared. This value can be anything (e.g. your name, the name of your favourite animal, or some other creative name). Once you leave the tool and come back to it at a later time, be sure to use the same worker_id. This way, you won’t have to go through the tutorial and qualification test again.
something like this Syd: https://vps.inskegroenen.nl/labeltool?worker_id=chocolatebutt
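For reference, the worker_id URL described in the messages above can also be built programmatically. A minimal sketch in Python — the base URL and the `worker_id` query parameter come from the thread; the helper function name is purely illustrative:

```python
from urllib.parse import urlencode

# Base URL of the labelling tool, as shared in the thread above.
BASE_URL = "https://vps.inskegroenen.nl/labeltool"

def tool_url(worker_id: str) -> str:
    """Build the labelling-tool URL with a persistent worker_id.

    Reusing the same worker_id on a later visit skips the tutorial
    and qualification test, as explained above.
    """
    return f"{BASE_URL}?{urlencode({'worker_id': worker_id})}"

print(tool_url("chocolatebutt"))
# https://vps.inskegroenen.nl/labeltool?worker_id=chocolatebutt
```

Using `urlencode` here just ensures the chosen worker_id is safely escaped even if it contains spaces or special characters.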