Tutorial: tensor network basics #1193
base: master
Conversation
👋 Hey, looks like you've updated some demos! 🐘 Don't forget to update the metadata field(s). Please hide this comment once the field(s) are updated. Thanks!
Great skeleton @EmilianoG-byte @Shiro-Raven!
Mainly minor suggestions, and one bigger one towards the last section on quantum circuit applications.
demonstrations/tutorial_tn_basics.py
Outdated
- For this reason there exist heuristics for optimizing contraction path complexity. NP problem -> no perfect solution but great heuristics (https://arxiv.org/pdf/2002.01935).
  (optional) mention the idea behind some of them.
  Link to quimb examples.
- CODE: show this using np.einsum, timeit, and very large dimensions, expecting to see a difference.
timeit is a good idea, could also print out dimensions at different steps along a contraction path
good idea!
Coming back to this, I printed the dimensions locally, but I don't quite get what insight we gain from this. Since we only have three tensors, the dimensions of the intermediate tensors (AB) and (BC) are actually exactly the same. This is also by construction, to get the expected scaling in the computational cost.
I could come up with an example where the dimensions vary between contraction paths, but it would probably have to be more complex and not just a "triangle-like" tensor network.
Yeah sounds like the example is too simple to show the desired property
I could come up with an example where the dimensions vary between contraction paths
💯
just to be sure, I guess the timing is also the same in the situation you describe right now?
Well, the timing varies, and that is what I was trying to convey here, as the timing scales exactly as we would expect from the complexity analysis I discussed some lines before. What would you like to show with the dimensions? That some paths result in tensors of larger intermediate size?
What would you like to show with the dimensions? That some paths result in tensors of larger intermediate size?
exactly :)
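For concreteness, a minimal NumPy sketch of such an example (all shapes here are illustrative, not taken from the demo): a chain A_{ij} B_{jk} C_{kl} where one contraction path creates a large intermediate tensor and the other a tiny one, with timings to match:

```python
import numpy as np
from timeit import timeit

# chain A_{ij} B_{jk} C_{kl}: the bond between A and B is small (2),
# the outer legs are large (n), so the two paths differ strongly
n = 1000
A = np.random.rand(n, 2)
B = np.random.rand(2, n)
C = np.random.rand(n, 2)

# intermediate tensor of each contraction path
print("(AB) shape:", (A @ B).shape)  # (1000, 1000) -> large intermediate
print("(BC) shape:", (B @ C).shape)  # (2, 2)       -> tiny intermediate

# the timings follow the complexity analysis
t_AB_first = timeit(lambda: (A @ B) @ C, number=20)
t_BC_first = timeit(lambda: A @ (B @ C), number=20)
print(f"(AB)C: {t_AB_first:.4f}s, A(BC): {t_BC_first:.4f}s")

# np.einsum can also search for a good path automatically
res = np.einsum("ij,jk,kl->il", A, B, C, optimize="greedy")
```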
demonstrations/tutorial_tn_basics.py
Outdated
From tensor networks to quantum circuits:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Quantum circuits are a restricted subclass of tensor networks.
- Show the examples on https://arxiv.org/pdf/1912.10049 pages 8 and 9: a quantum circuit for a Bell state, defining each component as a tensor and showing their contraction.
I would suggest doing concrete examples of gates already above, e.g. single-qubit gates = matrices, two-qubit gates like CNOT = 4-leg tensors with two "in" and two "out" legs, etc.,
and then focus here on how, e.g., an expectation value $\langle \psi_0 | U^\dagger H U |\psi_0\rangle$ is evaluated as a tensor network.
there are some subtleties here:

- While in principle an n-qubit state is an n-legged (= 2^n-sized) tensor, $\psi_0$ is often a product state, so it is just n independent vectors.
- H is often the sum of multiple operators. It is beyond the scope of this tutorial to go in depth, but there are ways to efficiently represent such a sum of tensors (e.g. MPOs and generalizations thereof). The "naive" thing to do for a sum of operators $H = \sum_i h_i$ is to do separate evaluations for each $\langle \psi_0 | U^\dagger h_i U |\psi_0\rangle$ and sum them in the end.
- $\langle \psi_0 | U^\dagger$ and $U |\psi_0\rangle$ appear twice in the equation; if one can represent their result efficiently we can re-use it (not a given, as it may result in a large 2^n tensor).
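A possible minimal sketch of both points, in case it helps (the gate definitions are standard; everything else is illustrative):

```python
import numpy as np

# single-qubit gate = matrix (rank-2 tensor), e.g. Hadamard
Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# two-qubit gate = rank-4 tensor: reshape the 4x4 CNOT matrix into
# legs (out1, out2, in1, in2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)

# Bell-state circuit CNOT (H x I) |00>, contracted as one tensor network
zero = np.array([1.0, 0.0])
psi = np.einsum("abij,ik,k,j->ab", CNOT, Had, zero, zero)
print(psi.reshape(4))  # [1/sqrt(2), 0, 0, 1/sqrt(2)]

# "naive" expectation value of a sum H = Z_1 + Z_2: one evaluation per term
Z = np.diag([1.0, -1.0])
terms = [np.kron(Z, np.eye(2)), np.kron(np.eye(2), Z)]
vec = psi.reshape(4)
print(sum(vec.conj() @ h @ vec for h in terms))  # 0 for the Bell state
```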
While in principle an n-qubit state is an n-legged (= 2^n-sized) tensor, $\psi_0$ is often a product state, so it is just n independent vectors.
doesn't this go more in the direction of MPS (which is out of the scope of this tutorial)?
The "naive" thing to do for a sum of operators
If I am not mistaken, since we are assuming an exact contraction of the tensor network without approximating it as an MPS, this naive way is the only option, no? Or is it possible to use the Hamiltonian as an MPO and contract it with the state vector even though it is not in MPS form?
if one can represent their result efficiently we can re-use it (not a given as it may result in a large 2^n tensor)
here again you mean efficiently as an MPS? Or do you mean this thing that Quimb does where it reuses contraction paths and other parts of the computation? (I couldn't find the link but I remember seeing something along these lines)
If I am not mistaken, since we are assuming an exact contraction of the tensor network without approximating it as an MPS, this naive way is the only option, no? Or is it possible to use the Hamiltonian as an MPO and contract it with the state vector even though it is not in MPS form?
Update: Actually I thought more about it, and I think I was wrong on this. I cannot think of a real constraint on why just contracting a general tensor circuit (no MPS) with an MPO would not work.
doesn't this go more in the direction of MPS (which is out of the scope of this tutorial)?
A product state is something different. You can interpret it as an MPS with trivial virtual bond dimension 1 but that is not the point
If I am not mistaken, since we are assuming an exact contraction of the tensor network without approximating it as an MPS, this naive way is the only option, no? Or is it possible to use the Hamiltonian as an MPO and contract it with the state vector even though it is not in MPS form?
MPOs are typically exact and of course you can contract them with something that is not an MPS :)
here again you mean efficiently as an MPS? Or do you mean this thing that Quimb does where it reuses contraction paths and other parts of the computation? (I couldn't find the link but I remember seeing something along these lines)
The latter :) though I am actually not sure what https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/devices/default_tensor.py#L799 does
Do you think it is worth going into the details of boolean tensor networks when talking about the CNOT (for instance)?
Up to you! I personally don't find that too important, but if you like it feel free to include it :)
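For what it's worth, a tiny sketch of that MPO point: contracting a (random, purely illustrative) three-site MPO exactly with a dense state that is not an MPS:

```python
import numpy as np

d, D = 2, 3  # physical dimension and (illustrative) MPO bond dimension
W1 = np.random.rand(d, d, D)     # legs: (out, in, right bond)
W2 = np.random.rand(D, d, d, D)  # legs: (left bond, out, in, right bond)
W3 = np.random.rand(D, d, d)     # legs: (left bond, out, in)

psi = np.random.rand(d, d, d)    # a dense rank-3 state, *not* an MPS
psi /= np.linalg.norm(psi)

# exact contraction of the MPO with the dense state: no approximation involved
phi = np.einsum("axb,bcyd,dez,xyz->ace", W1, W2, W3, psi)
print(phi.shape)  # (2, 2, 2): again a dense rank-3 tensor
```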
I have now added a section on tensor networks and quantum computing. Lmk what you think!
is the demo overall ready for review?
I would think so @Qottmann :). I can read the whole thing between today and tomorrow to find typos, but content-wise I am happy with what it has.
I just have two TODOs on the script for myself, which are to add some details to some figures.
Great! Then I'd suggest doing a self-review first, and when it's ready for a full review please tag me in the PR :)
Thank you for opening this pull request. You can find the built site at this link.
Note: It may take several minutes for updates to this pull request to be reflected on the deployed site.
Initial draft from last week:
Very nice source with visual explanations that we can cite: https://www.math3ma.com/blog/matrices-as-tensor-network-diagrams
modify size of diagrams
Hi @Qottmann! I have finished checking the spelling and grammar, so I believe the demo is now ready for a review :). If anything, I saw my last 3 drawings could use some improvement in the thickness of the lines, but I guess that's a minor detail I can correct in the following days :D
@EmilianoG-byte awesome :) you can expect a review at the latest by eow
Great first draft @EmilianoG-byte , congrats!
I left a bunch of nitpicky comments, in particular feel free to ignore those marked as "personal opinion" at your discretion.
The demo starts relatively slow in the beginning (which is great pedagogically) and then very quickly goes very fast (also understandable since that is in the nature of these highly complex topics).
I wonder if you can adjust the pace at either end to make the experience smoother (I understand this is a very vague and hard-to-implement suggestion, but perhaps you get an idea). Perhaps it is also more a matter of framing the scope of the demo, making it clear in the beginning, end, and throughout what this demo is trying to achieve.
Perhaps as a good exercise for you to answer first and then use to translate into the draft: who is the target audience of this demo? what is the intention of writing this demo? and what should a reader take away from it?
I think the content itself is already great; it is just a matter of framing and scoping of the text :)
Part of the excitement surrounding tensor networks is due to their ability to represent complex data efficiently, which allows for, among other things, fast classical simulations. In addition, the diagrammatic language accompanying tensor networks makes working with them intuitive and suitable for describing a vast range of mathematical concepts, including quantum circuits.

In this tutorial, we aim to provide an introduction to tensor networks with a focus on their applications in quantum computing. We choose to start by discussing the basic notions and definitions of tensors and tensor networks and work our way up to more advanced topics such as contraction paths and algorithms used to simulate quantum computers using tensor networks.
(personal opinion)
could be reduced or even removed
Should I make the changes you were referring to, about being clearer on the scope of the demo?
demonstrations/tutorial_tn_basics.py
Outdated
where each :math:`i_n` is an *index* of dimension :math:`d_n` (it takes integer values :math:`i_n \in [1, d_n]`), and the number of indices :math:`r` is known as the *rank* of the tensor. We say :math:`T` is a rank-:math:`r` tensor.

.. tip::
    Some authors refer to the indices of tensors as their dimensions. In this tutorial, these two concepts, although related, will have different meanings.
what is the meaning of dimension here? I'd either name the meaning of "dimension" directly here or leave the comment altogether, as it doesn't help at this stage
I have cleaned up the explanation of the index-dimension relation a bit. Strictly speaking, I also think it is not truly necessary, but it could perhaps be helpful for beginners who might get confused by the terminology when coming from other tutorials :)
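In case it helps that terminology paragraph, the distinction in NumPy terms amounts to this (a minimal illustration, not from the demo):

```python
import numpy as np

T = np.zeros((2, 3, 4))  # a rank-3 tensor T_{i1 i2 i3}
print(T.ndim)   # 3 -> the rank r: the number of indices
print(T.shape)  # (2, 3, 4) -> the dimension d_n of each index
```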
Does the last diagram seem familiar? That is because it is the representation of a single-qubit gate! We will see later in this tutorial the relation between quantum circuits and tensor networks.

When working within the quantum computing notation, we adopt the convention that drawing the leg of a quantum state (i.e., a vector) to the right corresponds to a ket, i.e., a vector living in the Hilbert space, while drawing the legs to the left means it is a bra vector, i.e., living in the dual space.
Is the differentiation between bras and kets via the direction they are pointing a common definition, and is it actually necessary here? Instead of ascribing duals to left-pointing legs, you can also just indicate complex conjugation of the tensor.
I would say for "general tensor networks" (like in condensed matter) this is the opposite convention. But since in quantum computing we read from left to right, you can see that all kets result in legs pointing right. Analogously, the bras result in legs pointing left.
when writing tensor diagrams, transposition becomes irrelevant as it is always clear from context. So the only thing you need to differentiate a bra from a ket is complex conjugation, which you can indicate with a star or bar over the tensor name :)
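i.e., in code the bra is nothing more than the conjugated tensor (a minimal sketch, names illustrative):

```python
import numpy as np

ket = np.array([1.0, 1.0j]) / np.sqrt(2)  # |psi>: a rank-1 tensor
bra = ket.conj()                          # <psi|: just complex conjugation

# inner product <psi|psi> as a tensor contraction
print(np.einsum("i,i->", bra, ket))       # (1+0j)
```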
demonstrations/tutorial_tn_basics.py
Outdated
print("Rank-3 tensor: \n", tensor_rank3) | ||
|
||
############################################################################## | ||
# We can create a tensor of arbitrary rank following a similar procedure. This recursive approach illustrates how a rank-:math:`r` tensor can be seen as consisting of nested rank-:math:`(r-1)` tensors, represented in code by adding another level to the nested bracket structure ``[tensor_rank_r-1]``. |
(personal opinion)
the ``[tensor_rank_r-1]`` at the end of the sentence reads odd
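For reference, I imagine the construction that sentence describes looks something like this (my guess at the demo's code, purely illustrative):

```python
import numpy as np

# each rank is built by nesting the previous tensor one bracket deeper
tensor_rank1 = np.array([0, 1])
tensor_rank2 = np.array([tensor_rank1, tensor_rank1])
tensor_rank3 = np.array([tensor_rank2, tensor_rank2])

print("Rank-3 tensor: \n", tensor_rank3)
print(tensor_rank3.shape)  # (2, 2, 2)
```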
# :align: center
# :width: 45%
#
# On the right-hand side of the equality we have assumed a specific form for the U tensor in terms of local 2-qubit gates, which is often the case when dealing with real quantum hardware. In addition, it is common for the initial state to be a product state such as :math:`|0\rangle^{\otimes N}`, hence the form of the tensor in the diagram.
Note that in the diagram as depicted right now, the initial state is also assumed to be a product state, since all tensors are independent; a general input state would be one big rank-#qubits tensor.
True! That's what I meant with the last sentence of this paragraph. Should I also write what you mentioned, i.e. that "a general input state would be one big rank-#qubits tensor"?
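A short sketch of the contrast, if useful (illustrative):

```python
import numpy as np

n = 4
zero = np.array([1.0, 0.0])

# product state |0>^(⊗n): n independent rank-1 tensors (2n numbers in total)
product_state = [zero] * n

# a general input state: one big rank-n tensor with 2**n entries
general = np.random.rand(*([2] * n)) + 1j * np.random.rand(*([2] * n))
general /= np.linalg.norm(general)
print(general.ndim, general.size)  # 4 16
```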
#
# When the observable of interest is *local*, i.e., it acts on a few neighbouring qubits, we can calculate the expectation value by considering only the section of the quantum circuit within the *reverse light cone* (causal cone) of the observable :math:`O_l`.
#
# .. figure:: ../_static/demonstration_assets/tn_basics/12-expectation-local.png
the convention you chose in the beginning with bras and kets is not doing you any favors here; the reversal of bra and ket unnecessarily confuses the image imo
I see your point, but once again this comes down to the convention used in quantum computing. Should I then instead draw all kets pointing left and bras to the right? Then the diagram here would make more sense, but then all the circuit diagrams would go against the usual convention 🤔
When you compute expectation values of hermitian observables, the order of what you call the bra and what you call the ket is actually unimportant. I'd leave it ambiguous and clear from context, but I see your point if you want to be more precise. Up to you! Just wanted to mention it as a possibility :)
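A quick numerical check of that statement (random hermitian observable, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
O = A + A.conj().T  # a hermitian observable

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

ev = np.einsum("i,ij,j->", psi.conj(), O, psi)                 # <psi|O|psi>
ev_swapped = np.einsum("i,ij,j->", psi, O.conj(), psi.conj())  # roles swapped

# equal and real (up to floating-point noise), since O is hermitian
print(ev, ev_swapped)
```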
# :align: center
# :width: 70%
#
# Analogously to what was done with the expectation values, these contractions only involve the sections of the circuit within the light cone of **both** the projection with :math:`| \hat{x}_1 \rangle` and the contraction with the COPY tensor (diagonal computation). This procedure can be repeated recursively using the chain rule equation until we obtain the full bitstring :math:`(\hat{x}_1, \hat{x}_2, \hat{x}_3, \ldots, \hat{x}_N)`. To obtain more samples, we repeat the procedure from the beginning; this is what makes every sample memoryless, or a perfect sample from the probability distribution.
what do you mean by this?
this is what makes every sample memoryless
Memoryless here means that we don't use a Markov chain, or information from previous samples, to generate the next full sample. This is contrary to the other "non-perfect"/Markov sampling algorithms like the one used back in the day in https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.100.040501.
I haven't read the old paper myself, but this difference was mentioned in the perfect sampling source I cited in the text and in https://tensornetwork.org/mps/algorithms/sampling/
cool 👍 I'd either mention and explain it or leave it :)
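If you decide to explain it, maybe a dense-state sketch of the chain-rule sampling helps (illustrative only; no tensor network structure is exploited here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
psi = rng.normal(size=(2,) * n) + 1j * rng.normal(size=(2,) * n)
psi /= np.linalg.norm(psi)

def perfect_sample(psi, rng):
    """Draw one bitstring via the chain rule p(x1) p(x2|x1) ... p(xn|...)."""
    bits = []
    cond = psi  # the state with all previously sampled bits fixed
    for _ in range(psi.ndim):
        # marginal probability of the next qubit (sum out the remaining ones)
        probs = np.sum(np.abs(cond) ** 2, axis=tuple(range(1, cond.ndim)))
        probs /= probs.sum()
        x = rng.choice(2, p=probs)
        bits.append(int(x))
        cond = cond[x]  # project onto the sampled outcome
    return tuple(bits)

# every call restarts from psi: no Markov chain, samples are independent
print([perfect_sample(psi, rng) for _ in range(5)])
```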
"id": "emiliano_godinez" | ||
}, | ||
{ | ||
"id": "ahmed_darwish" |
Please swap this out like this:
"id": "emiliano_godinez" | |
}, | |
{ | |
"id": "ahmed_darwish" | |
"username": "emiliano" | |
}, | |
{ | |
"username": "ShiroRaven" |
It will pass the metadata checks then.
Also, @Shiro-Raven, you're still missing a picture and the headline on your account.
is @Shiro-Raven still an author of the demo?
"dateOfPublication": "2024-08-06T00:00:00+00:00", | ||
"dateOfLastModification": "2024-08-06T00:00:00+00:00", |
tk
Before submitting

Please complete the following checklist when submitting a PR:

- Ensure that your tutorial executes correctly and conforms to the guidelines specified in the README.
- Remember to do a grammar check of the content you include.
- All tutorials conform to PEP8 standards. To auto-format files, simply pip install black, and then run black -l 100 path/to/file.py.

When all the above are checked, delete everything above the dashed line and fill in the pull request template.
Title:
Summary:
Relevant references:
[sc-66746]
If you are writing a demonstration, please answer these questions to facilitate the marketing process.
GOALS — Why are we working on this now?
E.g. Promote a new PL feature or show a PL implementation of a recent paper.
AUDIENCE — Who is this for?
E.g. Chemistry researchers, PL educators, beginners in quantum computing.
KEYWORDS — What words should be included in the marketing post?
Which of the following types of documentation is most similar to your file?
(more details here)