Address mypy errors #546

Merged
merged 7 commits into from Sep 18, 2023
172 changes: 88 additions & 84 deletions .pre-commit-config.yaml
@@ -1,92 +1,96 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-yaml
exclude: '\.*conda/.*'
- id: end-of-file-fixer
- id: trailing-whitespace
exclude: '\.txt$|\.tsv$'
- id: check-case-conflict
- id: check-merge-conflict
- id: detect-private-key
- id: debug-statements
- id: check-added-large-files

- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.33.0
hooks:
- id: markdownlint
args: ["--config", ".markdownlint.json"]

- repo: https://github.com/ambv/black
rev: 23.3.0
hooks:
- id: black
args: [.]
pass_filenames: false
always_run: true
exclude: ^metamist/

- repo: https://github.com/PyCQA/flake8
rev: "6.0.0"
hooks:
- id: flake8
additional_dependencies: [flake8-bugbear, flake8-quotes]

# Using system installation of pylint to support checking python module imports
- repo: local
hooks:
- id: pylint
name: pylint
entry: pylint
language: system
types: [python]

# mypy
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.961
hooks:
- id: mypy
args:
[
--pretty,
--show-error-codes,
--no-strict-optional,
--ignore-missing-imports,
--install-types,
--non-interactive,
]
additional_dependencies:
- strawberry-graphql[fastapi]==0.138.1
# mypy
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.5.1
hooks:
- id: mypy
args:
[
--pretty,
--show-error-codes,
--no-strict-optional,
--ignore-missing-imports,
--install-types,
--non-interactive,
--show-error-context,
--disable-error-code,
operator,
]
additional_dependencies:
- strawberry-graphql[fastapi]==0.206.0
- types-PyMySQL==1.1.0.1
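The updated mypy hook (the second block above, rev v1.5.1) adds `--disable-error-code operator`, which suppresses mypy's `[operator]` diagnostics, e.g. arithmetic on a value mypy considers possibly `None`. A minimal sketch of the kind of code that trips this check (names are illustrative, not from this repo):

```python
from typing import Optional


def add_one(x: Optional[int]) -> int:
    # Without narrowing, `x + 1` would make mypy report:
    #   Unsupported operand types for + ("None" and "int")  [operator]
    # Narrowing the type (or disabling the error code, as the hook does)
    # silences it; narrowing is the safer fix where practical.
    if x is None:
        return 1
    return x + 1
```

Disabling the code buys a quieter baseline at the cost of some safety; individual modules can still be tightened later.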

- repo: https://github.com/pre-commit/mirrors-prettier
rev: "v3.0.0-alpha.4"
hooks:
- id: prettier
# I'm not exactly sure why it changes behaviour, but
# calling `cd web`, then calling `ls src/**/*.tsx`
# returns different results to `cd web && ls src/**/*.tsx`
# so just include both patterns here
entry: bash -c 'cd web && prettier --write --ignore-unknown --check src/*.{ts,tsx,css} src/**/*.{ts,tsx,css}'
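The comment in the prettier hook above is most likely a bash globbing quirk: `**` only recurses when the `globstar` shell option is set, and otherwise behaves like a single `*`, so `src/**/*.tsx` misses files directly under `src/`. Listing both patterns sidesteps the issue. A small sketch under that assumption (paths are illustrative):

```shell
# With globstar off (bash's default), ** acts like a single *.
mkdir -p /tmp/globdemo/src/components
touch /tmp/globdemo/src/app.tsx /tmp/globdemo/src/components/button.tsx
cd /tmp/globdemo

shopt -u globstar
printf '%s\n' src/**/*.tsx   # matches only src/components/button.tsx

shopt -s globstar
printf '%s\n' src/**/*.tsx   # matches both files
```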

- repo: https://github.com/pre-commit/mirrors-eslint
rev: "v8.33.0"
hooks:
- id: eslint
entry: bash -c 'cd web && eslint'
files: \.[jt]sx?$
types: [file]
additional_dependencies:
- eslint@^7.32.0
- eslint-config-airbnb@^19.0.4
- eslint-config-airbnb-base@^15.0.0
- eslint-config-airbnb-typescript@^17.0.0
- eslint-config-prettier@^8.6.0
- eslint-plugin-import@^2.26.0
- eslint-plugin-jsx-a11y@^6.6.1
- eslint-plugin-prettier@^4.2.1
- eslint-plugin-react@^7.31.11
- eslint-plugin-react-hooks@^4.6.0
- "@typescript-eslint/eslint-plugin@^5.48.0"
- "@typescript-eslint/parser@^5.48.0"
86 changes: 42 additions & 44 deletions api/routes/analysis.py
@@ -8,12 +8,11 @@
from pydantic import BaseModel
from starlette.responses import StreamingResponse

from api.utils.dates import parse_date_only_string
from api.utils.db import (
get_projectless_db_connection,
Connection,
get_project_readonly_connection,
get_project_write_connection,
Connection,
get_projectless_db_connection,
)
from api.utils.export import ExportType
from db.python.layers.analysis import AnalysisLayer
@@ -22,20 +21,17 @@
from db.python.utils import GenericFilter
from models.enums import AnalysisStatus
from models.models.analysis import (
Analysis,
AnalysisInternal,
ProjectSizeModel,
SequencingGroupSizeModel,
DateSizeModel,
Analysis,
)
from models.utils.sample_id_format import (
sample_id_transform_to_raw_list,
sample_id_format,
)
from models.utils.sequencing_group_id_format import (
sequencing_group_id_format,
sequencing_group_id_format_list,
sequencing_group_id_transform_to_raw_list,
sequencing_group_id_format,
)

router = APIRouter(prefix='/analysis', tags=['analysis'])
@@ -326,40 +322,42 @@ async def get_sequencing_group_file_sizes(
"""
Get the per sample file size by type over the given projects and date range
"""
atable = AnalysisLayer(connection)

# Check access to projects
project_ids = None
pt = ProjectPermissionsTable(connection=connection.connection)
project_ids = await pt.get_project_ids_from_names_and_user(
connection.author, project_names, readonly=True
)

# Map from internal pids to project name
prj_name_map = dict(zip(project_ids, project_names))

# Convert dates
start = parse_date_only_string(start_date)
end = parse_date_only_string(end_date)

# Get results with internal ids as keys
results = await atable.get_sequencing_group_file_sizes(
project_ids=project_ids, start_date=start, end_date=end
)

# Convert to the correct output type, converting internal ids to external
fixed_pids: list[Any] = [
ProjectSizeModel(
project=prj_name_map[project_data['project']],
samples=[
SequencingGroupSizeModel(
sample=sample_id_format(s['sample']),
dates=[DateSizeModel(**d) for d in s['dates']],
)
for s in project_data['samples']
],
)
for project_data in results
]

return fixed_pids
raise NotImplementedError('This route is broken, and not properly implemented yet')
# atable = AnalysisLayer(connection)

# # Check access to projects
# project_ids = None
# pt = ProjectPermissionsTable(connection=connection.connection)
# project_ids = await pt.get_project_ids_from_names_and_user(
# connection.author, project_names, readonly=True
# )

# # Map from internal pids to project name
# prj_name_map = dict(zip(project_ids, project_names))

# # Convert dates
# start = parse_date_only_string(start_date)
# end = parse_date_only_string(end_date)

# # Get results with internal ids as keys
# results = await atable.get_sequencing_group_file_sizes(
# project_ids=project_ids, start_date=start, end_date=end
# )

# # Convert to the correct output type, converting internal ids to external
# fixed_pids: list[Any] = [
# ProjectSizeModel(
# project=prj_name_map[project_data['project']],
# samples=[
# SequencingGroupSizeModel(
# sample=sample_id_format(s['sample']),
# dates=[DateSizeModel(**d) for d in s['dates']],
# )
# for s in project_data['samples']
# ],
# )
# for project_data in results
# ]

# return fixed_pids
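The new route body fails fast with `NotImplementedError` rather than returning incorrect file sizes, keeping the old logic commented out for later repair. The pattern in isolation (function name illustrative; in FastAPI an uncaught exception surfaces as an explicit 500 to callers):

```python
def get_sequencing_group_file_sizes_stub() -> list:
    """Stubbed endpoint: fail loudly instead of returning wrong data."""
    raise NotImplementedError(
        'This route is broken, and not properly implemented yet'
    )
```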
4 changes: 3 additions & 1 deletion db/python/connect.py
@@ -121,7 +121,9 @@ def get_connection_string(self):
if self.port:
_host += f':{self.port}'

options = {} # {'min_size': self.min_pool_size, 'max_size': self.max_pool_size}
options: dict[
str, str | int
] = {} # {'min_size': self.min_pool_size, 'max_size': self.max_pool_size}
_options = '&'.join(f'{k}={v}' for k, v in options.items())

url = f'mysql://{u_p}@{_host}/{self.dbname}?{_options}'
10 changes: 6 additions & 4 deletions db/python/enum_tables/enums.py
@@ -1,6 +1,7 @@
import re
import abc
import re
from functools import lru_cache

from async_lru import alru_cache

from db.python.connect import DbBase
@@ -36,7 +37,8 @@ def _get_table_name(cls):
matcher = table_name_matcher.match(tn)
if not matcher:
raise ValueError(
f'The tablename {tn} is not valid (must match {table_name_matcher.pattern})'
f'The tablename {tn} is not valid (must match '
f'{table_name_matcher.pattern})'
)
return tn

@@ -47,9 +49,9 @@ async def get(self) -> list[str]:
"""
_query = f'SELECT DISTINCT name FROM {self._get_table_name()}'
rows = await self.connection.fetch_all(_query)
rows = [r['name'] for r in rows]
nrows = [r['name'] for r in rows]

return rows
return nrows

async def insert(self, value: str):
"""
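The `rows` to `nrows` rename in enums.py avoids rebinding one name to two different types (database records, then `list[str]`), which mypy reports as an incompatible assignment to the already-inferred type. In isolation (sample data illustrative):

```python
# Rebinding `rows` from record-like dicts to list[str] would make mypy
# infer conflicting types for the same name; a fresh name keeps each
# binding consistently typed.
rows: list[dict[str, str]] = [{'name': 'genome'}, {'name': 'exome'}]
nrows: list[str] = [r['name'] for r in rows]
```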
14 changes: 7 additions & 7 deletions db/python/layers/seqr.py
@@ -1,8 +1,8 @@
# pylint: disable=unnecessary-lambda-assignment,too-many-locals,broad-exception-caught

import asyncio
import os
import re
import asyncio
import traceback
from collections import defaultdict
from datetime import datetime
@@ -15,29 +15,29 @@
from cpg_utils.cloud import get_google_identity_token

from api.settings import (
SEQR_URL,
SEQR_AUDIENCE,
SEQR_MAP_LOCATION,
SEQR_SLACK_NOTIFICATION_CHANNEL,
SEQR_URL,
get_slack_token,
)
from db.python.connect import Connection
from db.python.enum_tables import SequencingTypeTable
from db.python.layers.analysis import AnalysisLayer
from db.python.layers.base import BaseLayer
from db.python.layers.family import FamilyLayer
from db.python.layers.participant import ParticipantLayer
from db.python.layers.sequencing_group import SequencingGroupLayer
from db.python.tables.analysis import AnalysisFilter
from db.python.tables.project import ProjectPermissionsTable
from db.python.enum_tables import SequencingTypeTable
from db.python.utils import ProjectId, GenericFilter
from db.python.utils import GenericFilter, ProjectId
from models.enums import AnalysisStatus

# literally the most temporary thing ever, but for complete
# automation need to have sample inclusion / exclusion
from models.utils.sequencing_group_id_format import (
sequencing_group_id_format_list,
sequencing_group_id_format,
sequencing_group_id_format_list,
)

SEQUENCING_GROUPS_TO_IGNORE = {22735, 22739}
@@ -421,9 +421,9 @@ async def update_es_index(
)

if len(es_index_analyses) == 0:
return [f'No ES index to synchronise']
return ['No ES index to synchronise']

with AnyPath(fn_path).open('w+') as f:
with AnyPath(fn_path).open('w+') as f: # type: ignore
f.write('\n'.join(rows_to_write))

es_index = es_index_analyses[-1].output
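Two small fixes in seqr.py: the placeholder-free f-string becomes a plain string (flake8's F541), and `AnyPath(fn_path).open('w+')` gets a trailing `# type: ignore`, presumably because mypy cannot reconcile cloudpathlib's union return type with the context-manager use. A `# type: ignore` comment suppresses checking for that one line only. A sketch with a plain `Path` standing in for `AnyPath` (no ignore needed here):

```python
from pathlib import Path

rows_to_write = ['row1\tA', 'row2\tB']
fn_path = Path('/tmp/es_index_demo.tsv')

# With a plain Path this type-checks cleanly; in the diff, AnyPath's
# broader return type is what the `# type: ignore` papers over.
with fn_path.open('w+') as f:
    f.write('\n'.join(rows_to_write))
```

Where possible, scoping the suppression to a specific code (e.g. `# type: ignore[arg-type]`) keeps other errors on the line visible.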