While a single, global Python interpreter choice works fine for small repositories, large repositories face considerable code compatibility concerns across interpreter/virtualenv versions, and language-version migrations require supporting split states in which some targets have converted and others have not.
Today it's possible to set up multiple Python toolchains and multiple virtualenvs and use build configuration to choose how a given target or group of targets is built and tested, and that works fine.
A problem we've encountered is that users who wish to drop backwards compatibility have a hard time doing so in a sound way. Consider a service which has been able to adopt a new interpreter and would like to use new language features such as `:=` or `match`. Code that adopts a syntax extension like this is no longer compatible with earlier interpreters, which causes problems both in linting and when targets configured for older interpreters depend on targets that have adopted these features.
The idea I've been kicking around is to extend `PyInfo` with a `requires_python` attribute expressing an interpreter constraint range, the same as in `setuptools`. It should then be possible for, at a minimum, the `py_binary` rule implementation to perform a consistency check over all the `PyInfo.requires_python` values and ensure that the rule as defined contains only compatible sources. It would also be desirable to restrict e.g. `rules_lint` behavior so that 3.8 linting is applied only to 3.8-compatible sources, the alternative being choking on 3.11 features.
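For concreteness, here is a rough sketch of the kind of consistency check the `py_binary` implementation could perform, written in plain Python with the `packaging` library rather than Starlark. The `requires_python` strings stand in for the proposed `PyInfo.requires_python` values collected from transitive deps; the function name and labels are illustrative, not existing rules_python API.

```python
# Sketch only: checks a set of proposed PyInfo.requires_python specifiers
# against the interpreter version a binary is configured for.
from packaging.specifiers import SpecifierSet
from packaging.version import Version


def check_requires_python(target_version, constraints):
    """Return labels whose requires_python excludes target_version."""
    version = Version(target_version)
    return [
        label
        for label, spec in constraints.items()
        if version not in SpecifierSet(spec)
    ]


# Example: a 3.8-configured binary depending on a library that has
# adopted 3.11-only syntax. The labels are hypothetical.
incompatible = check_requires_python(
    "3.8",
    {
        "//lib/legacy": ">=3.6",
        "//lib/modern": ">=3.11",  # e.g. uses match statements
    },
)
assert incompatible == ["//lib/modern"]
```

In a rule implementation this would presumably run over the merged transitive `PyInfo` providers and `fail()` with the offending labels, and the same predicate could drive which sources a given lint configuration is allowed to see.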
Thoughts?