
Contributing

We welcome all contributions to Saleor, including issues, new features, docs, discussions, and more. Read this document to learn more about the contribution process.

Issues

Use GitHub Issues to report a bug or a problem that you have found in Saleor. Use the "Bug report" issue template to provide information that will help us confirm the bug, such as steps to reproduce, expected behavior, Saleor version, and any additional context. When our team confirms a bug, it is added to the internal backlog and picked up as soon as possible. If you would like to fix a bug yourself, let us know in an issue comment, and we will try to assist you along the way.

New features

If you would like to propose or add a new feature, we encourage you to first open a discussion or an issue (using the "Feature request" template) to discuss it with the core team. This process helps us decide whether a feature is suitable for Saleor and design it before any implementation starts.

Before merging, every pull request submitted to Saleor has to be reviewed and approved by the core team. We review pull requests daily, but if a pull request requires more time or feedback from the team, it is marked as "queued for review".

Managing dependencies

Poetry

To guarantee repeatable installations, all project dependencies are managed using Poetry. The project’s direct dependencies are listed in pyproject.toml. Running poetry lock generates poetry.lock, which has all versions pinned.

You can install Poetry by using pip install --pre poetry or by following the official installation guide. We recommend using at least version 1.0.0b, as it contains many fixes and features that Saleor relies on.

tip

We recommend that you use this workflow and keep pyproject.toml as well as poetry.lock under version control to make sure all computers and environments run exactly the same code.

Other tools

For compatibility, Saleor also provides requirements.txt and requirements_dev.txt.

These files should be updated by running poetry export --without-hashes -f requirements.txt -o requirements.txt and poetry export --without-hashes -f requirements.txt -o requirements_dev.txt --dev, respectively.

File structure

We are using a standard Django structure: every app has its own directory, where you can find:

  • the migrations directory
  • the management directory
  • models
  • utils - functions of general utility related to this module
  • error codes - the definitions of errors that might appear
  • the tests directory - the related tests
  • etc.

API file structure

The API files are in the saleor/graphql/ directory. Every app has its own directory inside, with:

  • schema.py - with definitions of queries and mutations
  • sorters.py - with definitions of sorters
  • filters.py - with definitions of filters
  • types.py - with definitions of corresponding types
  • enums.py - with related enums
  • dataloaders.py - with data loaders for the given module
  • mutations file or directory
  • tests directory

We are aiming to have a mutations directory in every module, with a file per mutation, as is done in the checkout directory. See the example below:

.
└── saleor
    └── graphql
        └── checkout
            ├── mutations
            │   ├── checkout_create.py
            │   └── checkout_complete.py
            └── schema.py

Tests file structure

We keep tests in the tests directory. We aim to keep queries and mutations in separate directories, with a single file for every query and mutation. Combined with the previous example, it looks like this:

.
└── saleor
    └── graphql
        └── checkout
            ├── mutations
            │   ├── checkout_create.py
            │   └── checkout_complete.py
            ├── schema.py
            └── tests
                ├── mutations
                │   ├── test_checkout_complete.py
                │   └── test_checkout_create.py
                └── queries
                    ├── test_checkout.py
                    └── test_checkouts.py

Testing

Testing is a really important part of every project. In Saleor, we use the pytest library, and most of our tests are unit tests.

To reduce test execution time, we use pytest-xdist, which allows running tests on more than one CPU. By default, we use -n=auto, which spawns one worker process per available CPU.

We also use pytest-socket to ensure that our tests do not hit any external API without explicitly allowing it.

The test file structure was introduced in the Tests file structure section. The main rule is to not overload test files: smaller files are always preferable over big ones where lots of logic is tested and which are hard to extend. In the case of testing the API, we want to split tests into mutations and queries sections and test every query and mutation in a separate file.

How to run tests?

To run the tests, just enter pytest in your terminal:

pytest

To speed up testing, we recommend using the --reuse-db flag:

pytest --reuse-db

How to run particular tests?

As running all the tests is quite time-consuming, sometimes you would like to run only the tests from one directory, or even just one particular test. You can use the following commands for that. In the case of a particular directory or file, just provide the path after the pytest command, like:

pytest --reuse-db saleor/graphql/app/tests

If you would like to run a particular test, you need to provide the path to the file containing the test, followed by the test name after the :: sign. When running a single test, it is also worth using the -n0 flag to run it in a single process; this will significantly decrease the run time. See the example below:

pytest --reuse-db saleor/graphql/app/tests/mutations/test_app_create.py::test_app_create_mutation -n0

Using pdb when testing

If you would like to use pdb in code when running a test, you need a few flags: -n0 to run the tests in a single process, -s to disable capturing of standard output by pytest, and --allow-hosts with the default host, as we disable sockets by default. The previous example would then look like this:

pytest --reuse-db saleor/graphql/app/tests/mutations/test_app_create.py::test_app_create_mutation -n0 -s --allow-hosts=127.0.0.1
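
For reference, a minimal, self-contained sketch of dropping into the debugger from a test (the test body is illustrative):

def test_example():
    items = [1, 2, 3]
    total = sum(items)
    # Execution pauses here; inspect `total` at the pdb prompt.
    # Remember to run pytest with -n0 -s --allow-hosts=127.0.0.1.
    import pdb; pdb.set_trace()
    assert total == 6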

Recording cassettes

Some of our tests use VCR.py cassettes to record requests to and responses from external APIs. To record one, you need to use the --vcr-record flag and specify --allow-hosts:

pytest --vcr-record=once saleor/app/tests/test_app_commands.py --allow-hosts=127.0.0.1
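
For illustration, a hedged sketch of a cassette-backed test, assuming the @pytest.mark.vcr marker (provided by VCR.py pytest plugins such as pytest-recording) and the requests library; the URL is illustrative:

import pytest
import requests


@pytest.mark.vcr  # assumption: marker provided by a VCR.py pytest plugin
def test_external_api_call():
    # The first run (with --vcr-record=once and --allow-hosts) hits the network
    # and records a cassette; later runs replay the recorded response instead.
    response = requests.get("https://example.com/")
    assert response.status_code == 200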

Writing benchmark tests

The benchmark tests allow us to keep track of the number of database queries for mutations and queries. Tests should be added for every new mutation or query on objects. The benchmark tests are kept in the benchmark directory; every test must be marked with the pytest.mark.django_db and pytest.mark.count_queries(autouse=False) decorators and must have a count_queries argument. For more details, check the pytest-django-queries package.

@pytest.mark.django_db
@pytest.mark.count_queries(autouse=False)
def test_apps_for_federation_query_count(
    staff_api_client,
    permission_manage_apps,
    django_assert_num_queries,
    count_queries,
):
    ...

To check the number of queries, just run the benchmark test, and afterwards call the following command in your terminal:

django-queries show

You will see the number of queries that were performed for each test in every file.

Given, when, then

It is recommended to use given, when, then sections to distinguish between the different parts of a test. It significantly improves test readability and makes it clear what you are testing.
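
For illustration, a minimal sketch of this structure (the test itself is illustrative):

def test_total_sums_line_prices():
    # given
    line_prices = [10, 20, 30]

    # when
    total = sum(line_prices)

    # then
    assert total == 60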

Don't be afraid of docstrings

Use docstrings, especially for more complicated tests. Describe what is tested and what the expected behavior is. Usually, the test name describes exactly what we are testing, but often the name alone is not enough to explain all the expected behavior. A docstring can also reveal that a test should be split in two because it tests too much logic at once. In addition, docstrings are really helpful when maintaining tests.

Coding style

Saleor uses various tools to maintain a common coding style and help with development. To install all the development tools, run the following commands:

python -m pip install -r requirements_dev.txt

or use poetry:

poetry install

Saleor uses the pre-commit tool to check and automatically fix any formatting issue before creating a git commit.

Run the following command to install pre-commit into your git hooks and have it run on every commit:

pre-commit install

For more information on how it works, see the .pre-commit-config.yaml configuration file.

Saleor has a strict formatting policy enforced by the black formatting tool.

Module names should make their purpose obvious. Avoid generic file names such as utils.py.

Linters

Use black to make sure your code is correctly formatted.

Use isort to maintain consistent imports.

Use pylint with the pylint-django plugin to catch errors in your code.

Use pycodestyle to make sure your code adheres to PEP 8.

Use pydocstyle to check that your docstrings are properly formatted.

EditorConfig

EditorConfig is a standard configuration file that aims to ensure consistent style across multiple programming environments.

Saleor’s repository contains an .editorconfig file describing our formatting requirements.

Most editors and IDEs support this file either directly or via plugins. See the list of supported editors and IDEs for detailed instructions.

If you make sure that your programming environment respects the contents of this file, you will automatically get correct indentation, encoding, and line endings.

Tips and recommendations

Here are some tips and recommendations that will limit the number of comments on your PR. If you follow these rules, both sides will be happy.

Imports

Use relative imports.
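
For example, inside a module in saleor/graphql/app/, prefer the relative form (paths are illustrative):

# Preferred: relative import within the package
from .types import App

# Avoid: absolute import of a sibling module
# from saleor.graphql.app.types import App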

Models

  • UUID is the preferred type of primary key for the main models, especially for models that keep sensitive data.
  • Use the created_at field name for the creation date-time and updated_at for the update date-time.
  • New models should have a default sorting value set. It ensures that data is always returned in the same order and prevents flaky tests. See the sketch after this list.
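
A minimal sketch of a model following these rules (the model itself is illustrative):

import uuid

from django.db import models


class ExampleEvent(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        # A default ordering keeps query results deterministic and tests stable.
        ordering = ("created_at", "pk")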

Migrations

Try to combine multiple migrations into one, but remember not to mix schema changes with updating rows in migrations - in other words, operations that alter tables and operations that use RunPython to run methods on existing data should be in separate files, as sketched below.
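
For example, a schema change and a data backfill would live in two separate migration files. A hedged sketch (the app label, file names, and field are illustrative):

# 0042_add_is_active.py - alters the table only
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [("app", "0041_previous_migration")]
    operations = [
        migrations.AddField(
            model_name="app",
            name="is_active",
            field=models.BooleanField(default=False),
        ),
    ]

# 0043_backfill_is_active.py - updates existing rows only
from django.db import migrations


def backfill_is_active(apps, schema_editor):
    App = apps.get_model("app", "App")
    App.objects.update(is_active=True)


class Migration(migrations.Migration):
    dependencies = [("app", "0042_add_is_active")]
    operations = [migrations.RunPython(backfill_is_active, migrations.RunPython.noop)]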

Utility functions

Utility functions specific to the API should go to the graphql directory; otherwise, they go to the main module directory, usually to a utils.py file. When writing such a function, try to find as descriptive a name as possible. Also, do not forget about the docstring, especially when the function is complicated.
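
For example, a small utility with a descriptive name and a docstring (the function is illustrative):

def get_duplicated_values(values):
    """Return a set of all values that occur more than once in the given iterable."""
    seen = set()
    duplicates = set()
    for value in values:
        if value in seen:
            duplicates.add(value)
        seen.add(value)
    return duplicates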

Searching

Right now, we mainly use GinIndex and the ilike operator for searching, but we are currently testing a new solution that uses SearchVector and SearchRank. You can find it in PR #9344.

API

  • Use id for mutation inputs instead of model_name_id.
  • Use created_at field name for creation date time and updated_at for update date time.

API field descriptions

Every mutation, mutation field, and type field should have a short, meaningful description ending with a dot. We also label every new field with information about the version in which it was introduced, and when it is a new feature, we add the preview feature label. Both the ADDED_IN_X and PREVIEW_FEATURE labels should be placed at the end of the description. All labels can be found in saleor/graphql/core/descriptions.py.

When we would like to remove an API field, we first mark it as DEPRECATED. In a field definition, there is a dedicated argument for that: deprecation_reason; in the case of mutation arguments, we use the DEPRECATED_IN_X_INPUT label in the description.
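
A hedged sketch of composing such descriptions (the type and fields are illustrative; the label constants live in saleor/graphql/core/descriptions.py, and their exact names depend on the version):

import graphene

from saleor.graphql.core.descriptions import ADDED_IN_38, PREVIEW_FEATURE


class ExampleType(graphene.ObjectType):
    # New field: version and preview-feature labels go at the end of the description.
    new_field = graphene.String(
        description="Short, meaningful description." + ADDED_IN_38 + PREVIEW_FEATURE
    )
    # Field scheduled for removal: mark it as deprecated instead of deleting it.
    old_field = graphene.String(
        description="Short, meaningful description.",
        deprecation_reason="Use `new_field` instead.",
    )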

How to define permissions in queries/mutations?

To define permissions in queries, use the permission_required decorator to provide one or multiple permissions that are required, or the one_of_permissions_required decorator, which allows a user with at least one of the given permissions to perform the action.

In the case of mutations, the permissions are defined in the Meta arguments, in the permissions field. To represent all permissions in the same way, we introduced the AuthorizationFilters enum, which represents permission checks that are based on functions instead of named admin permission scopes. When raising PermissionDenied, the error should mention which permissions are required to perform the given action.

note

Required permissions should be mentioned in the GraphQL description.
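
A hedged sketch of both approaches (import paths and permission names vary between Saleor versions and are illustrative here):

from saleor.app import models
from saleor.core.permissions import AppPermission  # location may vary by version
from saleor.graphql.core.mutations import ModelMutation
from saleor.graphql.decorators import permission_required


# In a query: the decorator rejects callers without the required permission.
@permission_required(AppPermission.MANAGE_APPS)
def resolve_apps(_root, _info, **kwargs):
    ...


# In a mutation: permissions are declared in the Meta arguments.
class AppActivate(ModelMutation):
    class Meta:
        description = "Activate the app."
        model = models.App
        permissions = (AppPermission.MANAGE_APPS,)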

Error codes

New mutations should always have their own error class. For example, instead of using the generic PaymentErrorCode, for the PaymentCreate mutation we could have PaymentCreateErrorCode. The frontend will then get the list of error codes that can be triggered by this particular mutation instead of the list of all errors that could be raised by ALL payment mutations.

The error classes are defined in the saleor/graphql/core/types/common.py file. To create a new one, you will also need a GraphQL enum with the error codes. First, create a new enum in the error_codes.py file in the main app directory; a minimal sketch (with illustrative values):
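
from enum import Enum


class AppErrorCode(Enum):
    FORBIDDEN = "forbidden"
    INVALID = "invalid"
    NOT_FOUND = "not_found"
    REQUIRED = "required"
    UNIQUE = "unique"

Then use it to create a GraphQL enum in the saleor/graphql/core/enums.py file. When defining a GraphQL enum, you can specify the enum type name and attach a description, for example the enum docstring, as shown below: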

AppTypeEnum = to_enum(
    AppType,
    type_name="AppTypeEnum",
    description=AppType.__doc__,
)

Now you are ready to define the error class. The class must inherit from Error and must have a code field. Any other fields are optional but can be useful to specify which inputs raised the error. Here is an example:

class AppError(Error):
    code = AppErrorCode(description="The error code.", required=True)
    permissions = NonNullList(
        PermissionEnum,
        description="List of permissions which causes the error.",
        required=False,
    )

Sorting and filtering

To allow sorting a queryset, you must create a SortInputObjectType and a corresponding sorting enum; they should go to the dedicated sorters.py file in the given app module in the API.

class AppSortField(graphene.Enum):
    NAME = ["name", "pk"]
    CREATION_DATE = ["created", "name", "pk"]

    @property
    def description(self):
        if self.name in AppSortField.__enum__._member_names_:
            sort_name = self.name.lower().replace("_", " ")
            return f"Sort apps by {sort_name}."
        raise ValueError("Unsupported enum value: %s" % self.value)


class AppSortingInput(SortInputObjectType):
    class Meta:
        sort_enum = AppSortField
        type_name = "apps"

warning

Remember, the list defining the sorting order must contain a unique field (such as pk); otherwise, the order of the results is not deterministic.

tip

Sometimes you would like to sort the data by a value that has to be calculated and is not a model field. There is an option for that: create a method whose name starts with qs_with_, followed by the sort field name in lowercase. The method should annotate the queryset with the new value. Look at the example:

class CollectionSortField(graphene.Enum):
    NAME = ["name", "slug"]
    PRODUCT_COUNT = ["product_count", "slug"]

    @property
    def description(self):
        if self.name in CollectionSortField.__enum__._member_names_:
            sort_name = self.name.lower().replace("_", " ")
            return f"Sort collections by {sort_name}."
        raise ValueError("Unsupported enum value: %s" % self.value)

    @staticmethod
    def qs_with_product_count(queryset: QuerySet, **_kwargs) -> QuerySet:
        return queryset.annotate(product_count=Count("collectionproduct__id"))

The situation is similar for filtering: you need to create a FilterInputObjectType and a Django FilterSet in a dedicated filters.py file. Note that the FilterSet must be defined before the input type that references it:

class AppFilter(django_filters.FilterSet):
    type = EnumFilter(input_class=AppTypeEnum, method=filter_app_type)
    search = django_filters.CharFilter(method=filter_app_search)
    is_active = django_filters.BooleanFilter()

    class Meta:
        model = models.App
        fields = ["search", "is_active"]


class AppFilterInput(FilterInputObjectType):
    class Meta:
        filterset_class = AppFilter

Next, you need to create a CountableConnection for the given query:

class AppCountableConnection(CountableConnection):
    class Meta:
        node = App

Then, in the schema.py file, you need to change the corresponding query. It must be a FilterConnectionField with the created connection type; the sorting input must be used in the sort_by argument, and the filter input in the filter argument of the query:

class AppQueries(graphene.ObjectType):
    apps = FilterConnectionField(
        AppCountableConnection,
        filter=AppFilterInput(description="Filtering options for apps."),
        sort_by=AppSortingInput(description="Sort apps."),
        description="List of the apps.",
    )

How to handle breaking changes?

  • provide a compatibility layer by supporting both the old and the new solution for a specified time period, and remove the old one in the next major version
  • deprecate API fields instead of removing them

Git commit messages

To speed up the review process and to keep the logs tidy, we recommend the following simple rules on how to write good commit messages:

Summary line

  • It should contain fewer than 50 characters. It is best to make it short
  • Introduce what has changed, using imperatives: fix, add, modify, etc.

Description

  • Add extra explanation if you feel it will help others to understand the summary content
  • If you want, use bullet points (each bullet beginning with a hyphen or an asterisk)
  • Avoid writing in one line. Use line breaks so the reader does not have to scroll horizontally

tip

To ease review, try to limit your commits to a single, self-contained issue. This will also help others to understand and manage them in the future.

For more information and tips on how to write good commit messages, see the GitHub guide.

Pull requests

When you open a pull request, remember to add a meaningful title and a good description. Describe what is changing and the reason for doing it, or what problem it fixes. If it resolves a GitHub issue, please link it. Wait for all checks to finish, and if everything is green, request a review from the saleor/core group.

Changelog

We have a CHANGELOG.md file where we keep information about the changes that were introduced. The file contains a separate part for every release; new items should be put under the Unreleased section. This section is split into Breaking changes and Other changes. A changelog entry should consist of the name of the PR, the PR number, and the name of the author; it may also contain an additional explanation. Here is an example:

- Add multichannel - #6242 by @exampleUser

What is considered a breaking change?

Here is the full list of changes that we consider breaking:

  • deleting a field from the GraphQL schema / renaming a field
  • deleting a field from a webhook payload / changing the name of a returned field
  • changing the signatures of plugin functions (PluginsManager) - breaking only for existing plugins
  • adding new validation in mutation logic - it might break storefronts and apps
  • changing API behavior even if the schema doesn't change - e.g. the type of a field hasn't changed, but the requirements for its value have

Any change that is not listed above should go to the Other changes section.

I'm adding a fix to a just-merged feature, should I add a changelog record for that?

The changelog shouldn't reflect the history of building a feature; it should tell other developers that the actual feature was added. If the feature hasn't been released yet, and you believe the PR number of the fix is really worth mentioning, extend the existing changelog entry:

- Add multichannel - #6242, #6250 by @exampleUser

If the fix is not so crucial, it shouldn't be mentioned separately.

If a feature was already released, you can add a separate changelog entry just to mention a regular bugfix.