FPM and Conda are certainly viable options (though I have tried neither myself; I only know others who have). If you want to lift the burden of compilation and package management from the user, especially as your code grows large and complex, you could write platform-agnostic CMake build scripts that compile your code for the three major platforms (Windows, Linux, and macOS, or more if there is a user base), then gather all the resulting shared library files into a single Python package structure and upload it to PyPI. There is, again, the example Python packaging I have done, which includes example setup files. Once your package is ready, building and uploading it to PyPI is as simple as running the following few commands (inside the package root folder, here assuming the Anaconda Command Prompt on Windows),
python -m pip install --upgrade pip
python -m pip install --user --upgrade setuptools wheel
pip install --user --upgrade twine
python setup.py sdist bdist_wheel
REM to upload the package to PyPI test portal:
twine upload --repository testpypi dist/*
REM to upload the package to the main portal, only after you test the package on testpypi:
twine upload --repository pypi dist/*
(REM is the Windows Batch comment marker and is ignored on the command line). The final structure of the Python package (before running the above commands), which would be built and uploaded to PyPI, is very much like the ones on this GitHub release page: libparamonte_python_darwin_x64.tar.gz, libparamonte_python_linux_x64.tar.gz, libparamonte_python_windows_x64.zip, except that your PyPI package can combine all the platform-specific shared files into the same package (if you wish). The essential build files required for the PyPI package are setup.py and MANIFEST.in.
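To make that last point concrete, here is a minimal sketch of what such a setup.py might look like; the package name mypackage is a hypothetical placeholder, and the file patterns assume the precompiled shared libraries sit inside the package directory:

```python
# setup.py -- minimal sketch; "mypackage" and the shared-library
# file patterns below are hypothetical placeholders.
from setuptools import setup, find_packages

setup(
    name="mypackage",
    version="0.1.0",
    packages=find_packages(),
    # Ship the precompiled shared libraries inside the wheel,
    # covering Linux (.so), Windows (.dll), and macOS (.dylib).
    package_data={"mypackage": ["*.so", "*.dll", "*.dylib"]},
    include_package_data=True,
)
```

The matching MANIFEST.in would then contain a line such as `recursive-include mypackage *.so *.dll *.dylib` so the binaries are also picked up by the source distribution.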
I have long been meaning to write a post on how to do this with an example Python-Fortran package from start to finish, but have not yet found the time for it. Until then, I would be happy to provide further help here, if needed.
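In the meantime, as a rough starting point, the CMake side mentioned above could be sketched along these lines; the source file mylib.f90, the target name mylib, and the output directory mypackage are all hypothetical placeholders:

```cmake
# CMakeLists.txt -- minimal sketch; "mylib.f90", "mylib", and
# "mypackage" are hypothetical placeholders.
cmake_minimum_required(VERSION 3.14)
project(mylib Fortran)

# Build a shared library that Python can later load (e.g., via ctypes).
add_library(mylib SHARED mylib.f90)

# Place the binary inside the Python package directory so that the
# setup.py package_data patterns pick it up.
set_target_properties(mylib PROPERTIES
    LIBRARY_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/mypackage  # .so / .dylib
    RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/mypackage  # Windows .dll
)
```

Running this once per platform (natively or via CI) would produce the three shared libraries that the PyPI package then bundles together.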