
Batched Data usage problem. #252

Closed
RuihongQiu opened this issue Apr 30, 2019 · 4 comments
RuihongQiu (Contributor)

❓ Questions & Help

Hello. I came across a strange bug when manipulating a batched Data. I can print the data and assign it to a variable, but when I print one of its attributes, it fails, though some other batches don't have this bug. Could you help me?

In[3]: data
Out[3]: Batch(batch=[176], edge_attr=[137], edge_index=[2, 137], in_degree_inv=[137], 
out_degree_inv=[137], sequence=[282], sequence_len=[100], x=[176, 1], y=[100])
In[4]: x = data.x
In[5]: print(x)
Traceback (most recent call last):
  File "/home/test/pt-1.0/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3296, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-5-fc17d851ef81>", line 1, in <module>
    print(x)
  File "/home/test/pt-1.0/lib/python3.6/site-packages/torch/tensor.py", line 66, in __repr__
    return torch._tensor_str._str(self)
  File "/home/test/pt-1.0/lib/python3.6/site-packages/torch/_tensor_str.py", line 277, in _str
    tensor_str = _tensor_str(self, indent)
  File "/home/test/pt-1.0/lib/python3.6/site-packages/torch/_tensor_str.py", line 195, in _tensor_str
    formatter = _Formatter(get_summarized_data(self) if summarize else self)
  File "/home/test/pt-1.0/lib/python3.6/site-packages/torch/_tensor_str.py", line 80, in __init__
    value_str = '{}'.format(value)
  File "/home/test/pt-1.0/lib/python3.6/site-packages/torch/tensor.py", line 378, in __format__
    return self.item().__format__(format_spec)
RuntimeError: CUDA error: device-side assert triggered
rusty1s (Member) commented Apr 30, 2019

To me this does not look like a PyG bug, but it is hard to say what may be the cause of this error. You can print data because it only requests data.x.size(). I'm pretty sure that print(x.size()) does not throw an error either, but accessing the real data does.
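The distinction rusty1s draws can be sketched with a toy class (a hypothetical stand-in, not PyG's actual implementation): a `__repr__` that reads only tensor sizes never touches the values, so it succeeds even when formatting the values would trip a device-side assert.

```python
import torch

class LazyReprData:
    """Hypothetical stand-in for PyG's Batch: its __repr__ reads only
    tensor sizes, never tensor values."""
    def __init__(self, **tensors):
        self.__dict__.update(tensors)

    def __repr__(self):
        # Only metadata (shapes) is accessed here; no device->host copy
        # of the values is needed, so this is safe even for a tensor
        # whose values cannot be read.
        fields = ', '.join(
            f'{k}={list(v.size())}' for k, v in sorted(self.__dict__.items()))
        return f'Batch({fields})'

data = LazyReprData(x=torch.zeros(176, 1), y=torch.zeros(100))
print(data)           # touches only .size(), like print(data) in the issue
print(data.x.size())  # also metadata only, so it succeeds as well
# print(data.x) would format the actual values; on a corrupted CUDA
# tensor that is the first point where the device-side assert surfaces.
```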

RuihongQiu (Contributor, Author)

Yeah! You are right, x.size() also works. So is it a data generation bug? Do you have any idea how to debug it? I have tested every single graph when generating the InMemoryDataset.

rusty1s (Member) commented Apr 30, 2019

Is only x not printable? Can you run the code with CUDA_LAUNCH_BLOCKING=1?
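For context on this suggestion (a sketch; the variable can equally be set on the shell command line, e.g. `CUDA_LAUNCH_BLOCKING=1 python train.py`, where `train.py` is a placeholder name):

```python
import os

# CUDA errors are reported asynchronously by default, so the traceback
# points at whichever later call happened to synchronize (here, printing
# the tensor). CUDA_LAUNCH_BLOCKING=1 makes every kernel launch
# synchronous, so the device-side assert is raised at the line that
# actually caused it. It must be set before CUDA is initialized, i.e.
# before the first CUDA operation in the process.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Equivalent shell form (train.py is a placeholder script name):
#   CUDA_LAUNCH_BLOCKING=1 python train.py
print(os.environ["CUDA_LAUNCH_BLOCKING"])
```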

RuihongQiu (Contributor, Author)

It was a bug in my code: an index out of range when looking up embedding weights. Thank you for your advice!
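For readers hitting the same symptom: this failure mode can be reproduced on CPU, where PyTorch raises an eager IndexError instead of an asynchronous device-side assert, which makes it much easier to localize. A minimal sketch (the sizes and names here are illustrative, not taken from the issue):

```python
import torch

# An embedding lookup with an index >= num_embeddings. On CPU the error
# is raised immediately with a clear message; on CUDA the same bug fires
# an asynchronous device-side assert that only surfaces later (e.g. when
# printing a tensor). Moving tensors to CPU, or range-checking indices,
# is a quick way to pin it down.
emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=4)

ok = torch.tensor([0, 9])      # valid: indices lie in [0, 10)
assert emb(ok).shape == (2, 4)

bad = torch.tensor([10])       # invalid: 10 >= num_embeddings
try:
    emb(bad)
except IndexError as e:
    print('caught on CPU:', e)  # raised eagerly, at the offending call

# Cheap sanity check to run over a dataset before training:
assert int(ok.max()) < emb.num_embeddings
```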

rusty1s added a commit that referenced this issue Oct 9, 2023
This code is part of the overall distributed training support for PyG.

`DistNeighborSampler` leverages the `NeighborSampler` class from
`pytorch_geometric` and the `neighbor_sample` function from `pyg-lib`.
However, because distributed training requires synchronizing the results
between machines after each layer, the sampling logic itself was
implemented in Python.

Added support for the following sampling methods:
- node, edge, negative, disjoint, temporal

**TODOs:**

- [x] finish hetero part
- [x] subgraph sampling

**This PR should be merged together with other distributed PRs:**
pyg-lib: [#246](pyg-team/pyg-lib#246),
[#252](pyg-team/pyg-lib#252)
GraphStore/FeatureStore:
#8083
DistLoaders:
1.  #8079
2.  #8080
3.  #8085

---------

Co-authored-by: JakubPietrakIntel <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: ZhengHongming888 <[email protected]>
Co-authored-by: Jakub Pietrak <[email protected]>
Co-authored-by: Matthias Fey <[email protected]>