@ariane-millot let's try to document and add to the repo all the datasets used to run the different scripts. In the latest pull, I am missing several of the raster datasets you probably used, so I cannot run all parts of the code. We can use this issue to record all vector and raster datasets we use (name, source, justification) and then add them to a common folder, preferably on GitHub. If the size surpasses GitHub's allowance (~50MB), then we:
- either limit the test area from Copperbelt to something smaller, or
- store everything in an external folder (e.g., Google Drive, SharePoint) and add the link to the repo.
The latter would also require a slight restructuring of the scripts. However, this will allow easier testing going forward, especially since we want to start merging the different pieces.
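If we go the external-folder route, the script restructuring could be as small as a shared download helper that each script calls before reading its inputs. Below is a minimal sketch of that idea; the registry entries, URL, and function name are all hypothetical placeholders, not actual datasets from this repo:

```python
from pathlib import Path
from urllib.request import urlretrieve

# Hypothetical registry of datasets: name -> external download link.
# Real entries would come from the name/source/justification list in this issue.
DATASETS = {
    "admin_boundaries.gpkg": "https://example.com/data/admin_boundaries.gpkg",
}


def fetch_dataset(name: str, url: str, data_dir: Path = Path("data")) -> Path:
    """Download a dataset into data_dir unless it is already cached locally."""
    data_dir.mkdir(parents=True, exist_ok=True)
    target = data_dir / name
    if not target.exists():
        # Only hit the external folder when the file is missing locally.
        urlretrieve(url, target)
    return target
```

Each script would then open `fetch_dataset(name, DATASETS[name])` instead of a hard-coded local path, so a fresh clone can pull everything it needs on first run.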