diff --git a/doc/conf.py b/doc/conf.py index 5eb312dc..b652d667 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -64,7 +64,6 @@ # generate autosummary even if no references autosummary_generate = True - # Temporary work-around for spacing problem between parameter and parameter # type in the doc, see https://github.com/numpy/numpydoc/issues/215. The bug # has been fixed in sphinx (https://github.com/sphinx-doc/sphinx/pull/5976) but diff --git a/doc/contrib/algorithm.rst b/doc/contrib/algorithm.rst new file mode 100644 index 00000000..93cd1f42 --- /dev/null +++ b/doc/contrib/algorithm.rst @@ -0,0 +1,43 @@ +.. _implement-new:: + +========================= +Implement a new algorithm +========================= + +Criteria +^^^^^^^^ + +If you want to implement an algorithm and include it in the library, you need +to be aware of the criteria that exists in order to be accepted. In general, +any new algorithm must have: + +- A publication with a reasonable number of citations. +- A reference implementation or published inputs/outputs that we can validate + our version against. +- An implementation that doesn't require thousands of lines of new code, or + adding new mandatory dependencies. + +Of course, any of these three guidelines could be ignored in special cases. On +the other hand, we should prioritize the algorithms that have: + +- Larger number of citations +- Common parts that can be reused by other/existing algorithms +- Better proven performance over other similar/existing algorithms + + +Algorithm wish list +^^^^^^^^^^^^^^^^^^^ + +Some desired algorithms that are not implemented yet in package can be found +`here `_ and +`here `_. + +How to +^^^^^^ + +1. First, you need to be familiar with the metric-learn API, so check out the + :ref:`api-structure` first. +2. Propose in `Github Issues + `_ the algorithm + you want to incorporate to get feedback from the core developers. +3. If you get a green light, follow the guidelines on :ref:`contrib-code` diff --git a/doc/contrib/api.rst b/doc/contrib/api.rst new file mode 100644 index 00000000..13051e0c --- /dev/null +++ b/doc/contrib/api.rst @@ -0,0 +1,89 @@ +.. _api-structure: + +============= +API Structure +============= + +The API structure of metric-learn is insipred on the main classes from scikit-learn: +``Estimator``, ``Predictor``, ``Transformer`` (check them +`here `_). + + +BaseMetricLearner +^^^^^^^^^^^^^^^^^ + +All learners are ``BaseMetricLearner`` wich inherit from scikit-learn's ``BaseEstimator`` +class, so all of them have a ``fit`` method to learn from data, either: + +.. code-block:: + + estimator = estimator.fit(data, targets) + +or + +.. code-block:: + + estimator = estimator.fit(data) + +This class has three main abstract methods that all learners need to implement: + ++---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| **Abstract method** | **Description** | ++---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| pair_score | Returns the similarity score between pairs of points (the larger the score, the more similar the pair). For metric learners that learn a distancethe score is simply the opposite of the distance between pairs. 
| ++---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| pair_distance | Returns the (pseudo) distance between pairs, when available. For metric learrners that do not learn a (pseudo) distance, an error is thrown instead. | ++---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| get_metric | Returns a function that takes as input two 1D arrays and outputs the value of the learned metric on these two points. Depending on the algorithm, it can return a distance or a similarity function between pairs. | ++---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +As you may noticed, the algorithms can learn a (pseudo) distance or a similarity. Most +algorithms in the package learn a Mahalanobis metric, and have these three methods +available, but for similarity learners ``pair_distance`` must throw an error. If you +want to implement an algorithm of this kind, take this into account. + +MetricTransformer +^^^^^^^^^^^^^^^^^ + +Following scikit-learn's ``Transformer`` class gidelines, Mahalanobis learners inherit +from a custom class named ``MetricTransformer`` wich only has the ``transform`` method. +With it, these learners can apply a linear transformation to the input: + +.. code-block:: + + new_data = transformer.transform(data) + +Mixins +^^^^^^ + +Mixins represent the `metric` that algorithms need to learn. As of now, two main +mixins are available: ``MahalanobisMixin`` and ``BilinearMixin``. They inherit from +``BaseMetricLearner``, and/or ``MetricTransformer`` and **implement the abstract methods** +needed. Later on, the algorithms inherit from the Mixin to access these methods while +computing distance or the similarity score. + +As many algorithms learn the same metric, such as Mahalanobis, its useful to have the +Mixins to avoid duplicated code, and to make sure that these metrics are computed +correctly. + +Classifiers +^^^^^^^^^^^ + +Weakly-Supervised algorithms that learn from tuples such as pairs, triplets or quadruplets +can also classify unseen points, using the learned metric. + +Metric-learn has three specific plug-and-play classes for this: ``_PairsClassifierMixin``, +``_TripletsClassifierMixin`` and ``_QuadrupletsClassifierMixin``. All inherit from +``BaseMetricLearner`` to access the methods described earlier. + +All these classifiers implement the following methods: + ++---------------------+-------------------------------------------------------------------------------------+ +| **Abstract method** | **Description** | ++---------------------+-------------------------------------------------------------------------------------+ +| predict | Predicts the ordering between sample distances in input pairs/triplets/quadruplets. | ++---------------------+-------------------------------------------------------------------------------------+ +| decision_function | Returns the decision function used to classify the pairs. 
| ++---------------------+-------------------------------------------------------------------------------------+ +| score | Computes score of pairs/triplets/quadruplets similarity prediction. | ++---------------------+-------------------------------------------------------------------------------------+ diff --git a/doc/contrib/contributing.rst b/doc/contrib/contributing.rst new file mode 100644 index 00000000..8ea66584 --- /dev/null +++ b/doc/contrib/contributing.rst @@ -0,0 +1,381 @@ +============ +Contributing +============ + +This project is a community effort, and everyone is welcome +to contribute. + +The project is hosted on https://github.com/scikit-learn-contrib/metric-learn/ + +The decision making process and governance structure of metric-learn +is laid out in the governance document: :ref:`governance`. + +Metric-learn is somewhat selective when it comes to adding new +algorithms, and the best way to contribute and to help the project +is to start working on known issues. + +In case you experience issues using this package, do not hesitate to +submit a ticket to the `GitHub issue tracker +`_. +You are also welcome to post feature requests or pull requests. + +Our community, our values +========================= + +We are a community based on openness and friendly, didactic, discussions. + +We aspire to treat everybody equally, and value their contributions. We +are particularly seeking people from underrepresented backgrounds in Open +Source Software and metric-learn in particular to participate and contribute +their expertise and experience. + +Decisions are made based on technical merit and consensus. + +Code is not the only way to help the project. Reviewing pull requests, +answering questions to help others on mailing lists or issues, organizing +and teaching tutorials, working on the website, improving the documentation, +are all priceless contributions. + +We abide by the principles of openness, respect, and consideration of others +of the Python Software Foundation: https://www.python.org/psf/codeofconduct/ + +Ways to contribute +================== + +There are many ways to contribute to metric-learn, with the most common +ones being contribution of code or documentation to the project. Improving +the documentation is no less important than improving the library itself. +If you find a typo in the documentation, or have made improvements, do not +hesitate to send an email to the mailing list or preferably submit a GitHub +pull request. Full documentation can be found under the doc/ directory. + +But there are many other ways to help. In particular helping to improve, +triage, and investigate issues and reviewing other developers’ pull +requests are very valuable contributions that decrease the burden on the +project maintainers. + +Another way to contribute is to report issues you’re facing, and give a +“thumbs up” on issues that others reported and that are relevant to you. +It also helps us if you spread the word: reference the project from your +blog and articles, link to it from your website, or simply star to say +“I use it”: + +In case a contribution/issue involves changes to the API principles or +changes to dependencies or supported versions, it must be backed by a +:ref:`mlep`, where a MLEP must be submitted as a new +`Github Discussion +`_ +using the :ref:`mlep-template` and follows the decision-making process +outlined in metric-learn +:ref:`governance`. 
+ +Submitting a bug report or a feature request +============================================ + +We use GitHub issues to track all bugs and feature requests; feel free +to open an issue if you have found a bug or wish to see a feature +implemented. + +In case you experience issues using this package, do not hesitate to +submit a ticket to the `Bug Tracker +`_. +You are also welcome to post feature requests or pull requests. + +It is recommended to check that your issue complies with the following +rules before submitting: + +- Verify that your issue is not being currently addressed by other + `issues `_ + or `pull requests + `_. + +- If you are submitting an algorithm or feature request, please + verify that the algorithm fulfills our new algorithm requirements. + +- If you are submitting a bug report, we strongly encourage you to + follow the guidelines in How to make a good bug report. + +How to make a good bug report +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +When you submit an issue to `Github +`_, please +do your best to follow these guidelines! This will make it a lot easier +to provide you with good feedback: + +- The ideal bug report contains a **short reproducible code snippet**, + this way anyone can try to reproduce the bug easily (see `this + `_ + for more details). If your snippet is longer than around 50 lines, + please link to a `gist `_ or a github repo. + +- If not feasible to include a reproducible snippet, please be specific + about what **metric learners and/or functions are involved and the + shape of the data**. + +- Please include your **operating system type and version number**, as + well as your **Python, metric-learn, scikit-learn, numpy, and scipy** + versions. This information can be found by running the following + code snippet: + +.. code-block:: + + import platform; print(platform.platform()) + import sys; print("Python", sys.version) + import numpy; print("NumPy", numpy.__version__) + import scipy; print("SciPy", scipy.__version__) + import sklearn; print("Scikit-Learn", sklearn.__version__) + import metric_learn; print("Metric-Learn", metric_learn.__version__) + +- Please ensure all code snippets and error messages are formatted + in appropriate code blocks. See `Creating and highlighting code + blocks `_ + for more details. + +.. _contrib-code:: + +Contributing code +================= + +.. note:: + + To avoid duplicating work, it is highly advised that you search + through the issue tracker and the PR list. If in doubt about duplicated + work, or if you want to work on a non-trivial feature, it’s recommended + to first open an issue in the issue tracker to get some feedbacks from + core developers. + + One easy way to find an issue to work on is by applying the “help wanted” + label in your search. This lists all the issues that have been unclaimed + so far. In order to claim an issue for yourself, please comment exactly + `take` on it for the CI to automatically assign the issue to you. + +How to contribute +^^^^^^^^^^^^^^^^^ + +The preferred way to contribute to metric-learn is to fork the `main +repository `_ +on GitHub, then submit a “pull request” (PR). + +In the first few steps, we explain how to locally install metric-learn, +and how to set up your git repository: + +1. `Create an account on GitHub `_ if + you do not already have one. +2. Fork the `project repository + `_: click + on the ‘Fork’ button near the top of the page. This creates a copy + of the code under your account on the GitHub user account. 
For more + details on how to fork a repository see `this guide + `_. +3. Clone your fork of the metric-learn repo from your GitHub account + to your local disk: + + .. code-block:: bash + + git clone git@github.com:YourLogin/metric-learn.git # add --depth 1 if your connection is slow + cd metric-learn + +4. Install the development dependencies: + + .. code-block:: bash + + pip install numpy scipy scikit-learn pytest matplotlib skggm sphinx shinx_rtd_theme sphinx-gallery numpydoc + +5. Install metric-learn in editable mode: + + .. code-block:: bash + + pip install -e . + +.. _upstream: + +6. Add the ``upstream`` remote. This saves a reference to the main + metric-learn repository, which you can use to keep your repository + synchronized with the latest changes: + + .. code-block:: bash + + git remote add upstream https://github.com/scikit-learn-contrib/metric-learn + +7. Synchronize your ``main`` branch with the ``upstream/main`` branch, + more details on `GitHub Docs + `_: + + .. code-block:: bash + + git checkout main + git fetch upstream + git merge upstream/main + +8. Create a feature branch to hold your development changes: + + .. code-block:: bash + + git checkout -b my_feature + + and start making changes. Always use a feature branch. It's good + practice to never work on the ``main`` branch! + +9. Develop the feature on your feature branch on your computer, using Git to + do the version control. When you're done editing, add changed files using + ``git add`` and then ``git commit``: + + .. code-block:: bash + + git add modified_files + git commit + + to record your changes in Git, then push the changes to your GitHub + account with: + + .. code-block:: bash + + git push -u origin my_feature + +10. Follow `these + `_ + instructions to create a pull request from your fork. + + +It is often helpful to keep your local feature branch synchronized with the +latest changes of the main scikit-learn repository: + +.. code-block:: bash + + git fetch upstream + git merge upstream/main + +Subsequently, you might need to solve the conflicts. You can refer to the +`Git documentation related to resolving merge conflict using the command +line +`_. + +.. topic:: Learning git: + + The `Git documentation `_ and + http://try.github.io are excellent resources to get started with git, + and understanding all of the commands shown here. + +Pull request checklist +^^^^^^^^^^^^^^^^^^^^^^ + +Before a PR can be merged, it needs to be approved by two core developers. +Please prefix the title of your pull request with ``[MRG]`` if the +contribution is complete and should be subjected to a detailed review. An +incomplete contribution -- where you expect to do more work before receiving +a full review -- should be prefixed ``[WIP]`` (to indicate a work in +progress) and changed to ``[MRG]`` when it matures. WIPs may be useful to: +indicate you are working on something to avoid duplicated work, request +broad review of functionality or API, or seek collaborators. WIPs often +benefit from the inclusion of a `task list +`_ in +the PR description. + +In order to ease the reviewing process, we recommend that your contribution +complies with the following rules before marking a PR as ``[MRG]``. The +**bolded** ones are especially important: + +1. **Give your pull request a helpful title** that summarises what your + contribution does. This title will often become the commit message once + merged so it should summarise your contribution for posterity. In some + cases "Fix " is enough. "Fix #" is never a + good title. 
+ +2. **Make sure your code passes the tests**. The whole test suite can be run + with `pytest`, if all tests pass, you are ready to push your changes, + otherwise the CI will detect some tests don't pass later on, you need + to avoid this. + + Check the :ref:`testing_guidelines` for more details on testing. + +3. **Make sure your code is properly commented and documented**, and **make + sure the documentation renders properly**. To build the documentation, please + refer to our :ref:`contribute_documentation` guidelines. + +4. **Tests are necessary for enhancements to be + accepted**. Bug-fixes or new features should be provided with + `non-regression tests + `_. These tests + verify the correct behavior of the fix or feature. In this manner, further + modifications on the code base are granted to be consistent with the + desired behavior. In the case of bug fixes, at the time of the PR, the + non-regression tests should fail for the code base in the ``main`` branch + and pass for the PR code. + +5. **Make sure that your PR does not add PEP8 violations**. To check the + code that you changed, you can run the following command (see + :ref:`above ` to set up the ``upstream`` remote): + + .. code-block:: bash + + git diff upstream/main -u -- "*.py" | flake8 --diff + + or `make flake8-diff` which should work on unix-like system. + + You can also run the following code while you develop, to check your that + the coding style is correct: + + .. code-block:: bash + + flake8 --extend-ignore=E111,E114 --show-source --exclude=venv + +.. _testing_guidelines: + +Testing guidelines +^^^^^^^^^^^^^^^^^^ + +Follow these simple guidelines to test your new feature/module: + +1. Place all yout tests in the `test/` directory. All new tests + must be under a new file named `test_my_module_name.py`. Discuss + in your pull request where these new tests should be put in the + package later on. +2. All test methods inside this file must start with the `test_` + prefix, so pytest can detect and execute them. +3. Use a good naming for your tests that matches what it actually + does. +4. Comment each test you develop, to know in more detail what it + is intended to do and check. +5. Use pytest decorators. The most important one is `@pytest.mark.parametrize`. + That way you can test your method with different values without + hard-coding them. +6. If you need to raise a `Warning`, do a test that verifies that + the warning is being shown. Same for `Errors`. Some examples might + be warnings about a default configuration, a wrong input, etc. + +.. _building-the-docs: + +Building the docs +^^^^^^^^^^^^^^^^^ + +To build the docs is always recommended to start with a fresh virtual +environment, to make sure that nothing is interfering with the process. + +1. Create a new Python virtual environment named `venv` + + .. code-block:: bash + + python3 -m venv venv + +2. Install all dependencies needed to render the docs + + .. code-block:: bash + + pip3 install numpy scipy scikit-learn pytest matplotlib skggm sphinx shinx_rtd_theme sphinx-gallery numpydoc + +3. Install your local version of metric_learn into the virtual environment, + from the root directory. + + .. code-block:: bash + + pip3 install -e . + +5. Go to your doc directory and complies + + .. code-block:: bash + + cd doc + make html + +6. 
Open the `index.html` file inside `doc/_build/html` \ No newline at end of file diff --git a/doc/contrib/governance.rst b/doc/contrib/governance.rst new file mode 100644 index 00000000..7dfb93b7 --- /dev/null +++ b/doc/contrib/governance.rst @@ -0,0 +1,89 @@ +.. _governance: + +=========================================== +Metric learn governance and decision-making +=========================================== + +The purpose of this document is to formalize the governance process used by +the metric-learn project, to clarify how decisions are made and how the +various elements of our community interact. This document establishes a +decision-making structure that takes into account feedback from all +members of the community and strives to find consensus, while avoiding +any deadlocks. + +This is a meritocratic, consensus-based community project. Anyone with +an interest in the project can join the community, contribute to the +project design and participate in the decision making process. This +document describes how that participation takes place and how to set +about earning merit within the project community. + +Roles and Responsibilities +========================== + +Contributors +^^^^^^^^^^^^ + +Contributors are community members who contribute in concrete ways to +the project. Anyone can become a contributor, and contributions can +take many forms – not only code – as detailed in the contributors guide. + +Core developers +^^^^^^^^^^^^^^^ + +Core developers are community members who have shown that they are +dedicated to the continued development of the project through ongoing +engagement with the community. They have shown they can be trusted to +maintain metric-learn with care. Being a core developer allows +contributors to more easily carry on with their project related +activities by giving them direct access to the project’s repository and +is represented as being an organization member on the metric-learn GitHub +organization. Core developers are expected to review code contributions, +can merge approved pull requests, can cast votes for and against merging +a pull-request, can label and close issues, and can be involved in +deciding major changes to the API. + + +Decision Making Process +======================= + +Decisions about the future of the project are made through discussion +with all members of the community. Metric-learn uses a “consensus seeking” +process for making decisions. + +The group tries to find a resolution that has no open objections among +core developers. At any point during the discussion, any core-developer +can call for a vote. + +Decisions are made according to the following rules: + +- Minor Documentation changes, such as typo fixes, or addition/correction + of a sentence, requires +1 by a core developer, no -1 by a core + developer (lazy consensus), happens on the issue or pull request page. + Core developers are expected to give “reasonable time” to others to give + their opinion on the pull request if they’re not confident others + would agree. + +- Code changes and major documentation changes require +1 by two core + developers, no -1 by a core developer (lazy consensus), happens on + the issue of pull-request page. + +- Changes to the API principles and changes to dependencies or supported + versions happen via a Enhancement proposals (MLEPs) and follows the + decision-making process outlined above. 
+ +If a veto -1 vote is cast on a lazy consensus, the proposer can appeal +to the community and core developers and the change can be approved or +rejected using the decision making procedure outlined above. + +.. _mlep: + +Enchancement proposals (MLEPs) +============================== + +For vote on API changes, a proposal must have been made public and discussed +before the vote. Such proposal must be a consolidated document, in +the form of a ‘Metric-Learn Enhancement Proposal’ (MLEP), rather than +a long discussion on an issue. A MLEP must be submitted as a +`Github Discussion +`_ +using the MLEP template. See :ref:`mlep-template`. \ No newline at end of file diff --git a/doc/contrib/index.rst b/doc/contrib/index.rst new file mode 100644 index 00000000..b807008c --- /dev/null +++ b/doc/contrib/index.rst @@ -0,0 +1,37 @@ +.. title:: Contributing: contents + +.. _dev-contrib: + +============ +Developer contributions +============ + +.. toctree:: + :maxdepth: 2 + + contributing + +.. toctree:: + :maxdepth: 2 + + governance + +.. toctree:: + :maxdepth: 2 + + algorithm + +.. toctree:: + :maxdepth: 2 + + api + +.. toctree:: + :maxdepth: 2 + + release + +.. toctree:: + :maxdepth: 2 + + mlep \ No newline at end of file diff --git a/doc/contrib/mlep.rst b/doc/contrib/mlep.rst new file mode 100644 index 00000000..c5a0562c --- /dev/null +++ b/doc/contrib/mlep.rst @@ -0,0 +1,79 @@ +.. _mlep-template: + +============================== +MLEP Template and Instructions +============================== + +:Author: +:Status: +:Type: +:Created: +:Resolution: (required for Accepted | Rejected | Withdrawn) + +Abstract +-------- + +The abstract should be a short description of what the MLEP will achieve. + + +Detailed description +-------------------- + +This section describes the need for the MLEP. It should describe the +existing problem that it is trying to solve and why this MLEP makes the +situation better. It should include examples of how the new functionality +would be used and perhaps some use cases. + + +Implementation +-------------- + +This section lists the major steps required to implement the MLEP. Where +possible, it should be noted where one step is dependent on another, and which +steps may be optionally omitted. Where it makes sense, each step should +include a link related pull requests as the implementation progresses. + +Any pull requests or developmt branches containing work on this MLEP should +be linked to from here. (A MLEP does not need to be implemented in a single +pull request if it makes sense to implement it in discrete phases). + + +Backward compatibility +---------------------- + +This section describes the ways in which the MLEP breaks backward +compatibility. + + +Alternatives +------------ + +If there were any alternative solutions to solving the same problem, they +should be discussed here, along with a justification for the chosen +approach. + + +Discussion +---------- + +This section may just be a bullet list including links to any discussions +regarding the MLEP: + +- This includes links to mailing list threads or relevant GitHub issues. + + +References and Footnotes +------------------------ + +.. [1] Each MLEP must either be explicitly labeled as placed in the public + domain (see this MLEP as an example) or licensed under the `Open + Publication License`_. + +.. _Open Publication License: https://www.opencontent.org/openpub/ + + +Copyright +--------- + +This document has been placed in the public domain. 
[1]_
diff --git a/doc/contrib/release.rst b/doc/contrib/release.rst
new file mode 100644
index 00000000..ef72a6ce
--- /dev/null
+++ b/doc/contrib/release.rst
@@ -0,0 +1,77 @@
+.. _release:
+
+========================
+Publishing a new release
+========================
+
+This section has the information needed to do the different tasks related to
+releases. These tasks are usually performed by core developers, but they might
+be delegated to other developers in the future.
+
+Before the release
+==================
+
+1. Make sure that all version numbers are updated in ``metric_learn/_version.py``
+   and in ``doc/conf.py``.
+2. Make sure that the final year in the copyright notice in ``doc/conf.py`` is the
+   right one (e.g. 2022).
+3. Write the release summary, indicating: Changes, Features, Bug Fixes, Maintenance,
+   and any other relevant information.
+
+Pre-release
+===========
+
+1. Create a branch ``v0.1.0-release`` from master that includes all the changes mentioned above (e.g. the version number update).
+2. Turn this branch into a protected branch.
+3. Create a release ``v0.1.0-rc0`` ("rc" stands for release candidate) from the branch ``v0.1.0-release`` via the GitHub UI `here `_ and mark it as a pre-release.
+4. Give the community some time (e.g. 1-2 weeks) to test the release candidate.
+5. If there are any bug fixes, push them to the ``v0.1.0-release`` branch and then release ``v0.1.0-rc1``.
+6. Once we are confident, create a stable release ``v0.1.0``, build the package wheels, and then publish the package ``v0.1.0`` to PyPI (instructions below).
+
+Release
+=======
+
+1. On GitHub, click on the "draft a new release" button here:
+   https://github.com/scikit-learn-contrib/metric-learn/releases, and draft the release
+   (you can look at the commits made since the last release to help write the release
+   message). Then click on "Publish release" (it will automatically add the files
+   needed).
+2. Run the following commands from the repository in a local terminal; this will
+   publish the package to PyPI (you need an account on PyPI):
+
+   .. code-block:: bash
+
+      python3 setup.py sdist
+      python3 setup.py bdist_wheel
+      twine upload dist/*
+
+3. Normally, after a few minutes you should see that the badge on the ``README.rst``
+   gets updated with the new version, and that the new version is available if you
+   search for it on PyPI.
+
+
+Publish the docs
+================
+
+1. Make sure you can build the docs. Follow the steps from the section
+   :ref:`building-the-docs`.
+2. Once you've built the docs, copy all the content inside ``doc/_build/html`` into a
+   temporary folder.
+3. Check out the ``gh-pages`` branch.
+4. Delete everything except the ``.git`` directory and the ``.nojekyll`` file (`reference `_).
+5. Paste the content of ``doc/_build/html`` in the root directory of this branch.
+6. Push your changes and create a PR.
+7. Once the PR is merged, the website will be automatically updated with the latest changes.
+
+.. note::
+
+   This process should be automated. Feel free to create a PR for this
+   `open issue `_.
+
+Update PyPI and Conda
+=====================
+
+1. If requirements, license, etc. have not changed, a PR is automatically created in the `feedstock repository `_ for conda-forge.
+2. Otherwise, a dev can edit it and push directly to the PR (see `here `_), then merge the PR (which requires being identified as a maintainer of the feedstock).
+3. If it is not visible `here `_, a dev can do an empty commit to master to trigger the update.
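As a quick sanity check after the PyPI/conda steps above, the snippet below queries PyPI's JSON API and compares the published version with the locally installed one. This is only a hypothetical convenience script, not part of the release tooling:

.. code-block:: python

   # Hypothetical post-release check: is the new version visible on PyPI?
   import json
   from urllib.request import urlopen

   import metric_learn

   with urlopen("https://pypi.org/pypi/metric-learn/json") as response:
       latest = json.load(response)["info"]["version"]

   print("Installed locally:", metric_learn.__version__)
   print("Latest on PyPI:   ", latest)
   assert latest == metric_learn.__version__, "PyPI has not picked up the release yet"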
\ No newline at end of file diff --git a/doc/index.rst b/doc/index.rst index f9dfd83d..ceb40299 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -55,6 +55,11 @@ Documentation outline auto_examples/index +.. toctree:: + :maxdepth: 2 + + contrib/index + :ref:`genindex` | :ref:`search` .. |GitHub Actions Build Status| image:: https://github.com/scikit-learn-contrib/metric-learn/workflows/CI/badge.svg diff --git a/doc/preprocessor.rst b/doc/preprocessor.rst index ad1ffd8f..2d817461 100644 --- a/doc/preprocessor.rst +++ b/doc/preprocessor.rst @@ -4,9 +4,11 @@ Preprocessor ============ -Estimators in metric-learn all have a ``preprocessor`` option at instantiation. -Filling this argument allows them to take more compact input representation -when fitting, predicting etc... +.. rst-class:: deprecated + + Estimators in metric-learn all have a ``preprocessor`` option at instantiation. + Filling this argument allows them to take more compact input representation + when fitting, predicting etc... If ``preprocessor=None``, no preprocessor will be used and the user must provide the classical representation to the fit/predict/score/etc... methods of
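To make the role of the ``preprocessor`` argument concrete, here is a minimal sketch: when an array ``X`` is passed as ``preprocessor``, tuples of indices into ``X`` can be given to ``fit`` instead of tuples of raw points. The data values are made up, and ``MMC`` is used only as one example of a pairs learner:

.. code-block:: python

   import numpy as np
   from metric_learn import MMC

   # Raw points; passing them as ``preprocessor`` lets us refer to them by index.
   X = np.array([[1.2, 0.5],
                 [3.1, 1.8],
                 [0.7, 0.4],
                 [2.9, 2.2],
                 [1.0, 0.9]])

   # Each row of ``pairs`` is a pair of indices into X; ``y`` labels the pair as
   # similar (+1) or dissimilar (-1).
   pairs = np.array([[0, 2], [1, 3], [0, 4], [1, 2]])
   y = np.array([1, 1, 1, -1])

   mmc = MMC(preprocessor=X)
   mmc.fit(pairs, y)

   # After fitting, the learned metric can also be used on raw points.
   metric = mmc.get_metric()
   print(metric(X[0], X[2]))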