Background
So far I have only worked with one repository per project (the specific project does not matter). Lately I have been working on a project with several people on it, and that project also has a single repository.
Since the topics the members work on are all different, the project is organized into folders, for example:
- Visualization
- Preprocess
- Search

and so on. Each of these folders has its own requirements: the members keep one README and one requirements file per folder, and based on that there is a separate virtual environment for each part of the project (layout sketched below). The library versions each part uses may differ.
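The layout looks roughly like this (a sketch; only the three folder names above come from our actual repo, everything else is illustrative):

```
project-repo/
├── Visualization/
│   ├── README.md
│   └── requirements.txt
├── Preprocess/
│   ├── README.md
│   └── requirements.txt
└── Search/
    ├── README.md
    └── requirements.txt
```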
So far this has been working great.
The problem
Lately I have been working on a script (let's call it `myapi.py`) that is going to run on a remote machine. It is basically a REST API that calls another script (let's call it `process.py`). For the first time there are several characteristics that deviate from the way I have been working so far:

- `process.py` has to use scripts from a different repo (an open-source repo) that has its own requirements (somewhat older libraries).
- To run the REST API on the remote machine, I only need `myapi.py` (sketched below), not the whole repository we work in for the project (many folders and many scripts).
- However, these two scripts are part of what the project is working on.
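To make the setup concrete, `myapi.py` has roughly this shape (a minimal sketch: Flask, the `/process` endpoint, and the `process.main()` entry point are assumptions for illustration, not our real code):

```python
# myapi.py -- minimal sketch of the REST API wrapper.
# Flask and all names below are illustrative assumptions.
from flask import Flask, jsonify, request

import process  # the script that depends on the external open-source repo

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def run_process():
    payload = request.get_json()
    # process.main() stands in for whatever entry point process.py exposes
    result = process.main(payload)
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

The direct import is where the conflict shows up: serving the API and running `process.py` would have to share one environment, but their requirements differ.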
It would be wasteful to clone the whole repo on the remote machine just to use a single script (`myapi.py`), and I also cannot run the other one (`process.py`) with the usual virtual environment that each project folder has, because of the external repo's older requirements.
This situation is a first for me.
What is the recommendation for this case?
I could make a small separate repo with just the scripts needed on the remote machine, but that defeats the purpose of keeping all related scripts in the project repo.
What is the common practice in a situation like this?