2019
DOI: 10.1016/j.softx.2019.02.010

HexagDLy—Processing hexagonally sampled data with CNNs in PyTorch

Abstract: HexagDLy is a Python library extending the PyTorch deep learning framework with convolution and pooling operations on hexagonal grids. It aims to ease access to convolutional neural networks for applications that rely on hexagonally sampled data, such as those commonly found in ground-based astroparticle physics experiments.
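To make the abstract concrete, a minimal sketch of how such a library plugs into PyTorch is shown below. The module names hexagdly.Conv2d and hexagdly.MaxPool2d and their parameters are assumed here by analogy with torch.nn; check the HexagDLy repository for the actual API.

```python
# Minimal sketch: a hexagonal CNN assembled with HexagDLy-style modules.
# hexagdly.Conv2d / hexagdly.MaxPool2d are assumptions based on the abstract.
import torch
import torch.nn as nn
import hexagdly  # assumed package name


class HexNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Hexagonal convolution: kernel_size is assumed to count "rings" of
        # neighbours around the centre pixel rather than a square window.
        self.conv1 = hexagdly.Conv2d(in_channels=1, out_channels=8,
                                     kernel_size=1, stride=1)
        self.pool = hexagdly.MaxPool2d(kernel_size=1, stride=2)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, height, width) tensor holding hexagonally
        # sampled data in a square-grid (offset) addressing scheme.
        return self.pool(self.relu(self.conv1(x)))


model = HexNet()
out = model(torch.randn(4, 1, 16, 16))
print(out.shape)
```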

Cited by 42 publications (45 citation statements)
References 18 publications
“…Turbo coding is further depicted as taking place between all four folded autoencoders (via an alpha complex, in blue), so instantiating further (hierarchical) turbo coding and thereby a larger shared latent space, so enabling predictive modeling of causes that achieve coherence via larger (and more slowly forming) modes of informational integration. This shared latent space is illustrated as containing an embedded graph neural network (GNN) (Liu et al., 2019; Steppa and Holch, 2019), depicted as a hexagonal grid, as a means of integrating information via structured representations, where resulting predictions can then be propagated downward to individual folded autoencoders. Variable shading within the hexagonal grid-space of the GNN is meant to indicate degrees of recurrent activity, potentially implementing further turbo coding, and red arrows over this grid are meant to indicate sequences of activation, and potentially representations of trajectories through feature spaces.…”
Section: The Conscious Turbo-code
confidence: 99%
“…As the amount of research that combines principles from hexagonal image processing and machine learning is quite limited, this contribution also aims to overcome the downsides of currently developed hexagonal machine learning approaches. These include not only the development of hexagonal convolutional layers following Hoogeboom et al. (2018) [25] and Steppa and Holch (2019) [26], but also the implementation of the underlying addressing scheme as well as the necessary processing steps for transformation and visualization.…”
Section: A. Related Work
confidence: 99%
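As background for the addressing-scheme point in the statement above, the following is a generic sketch of one common scheme: the conversion between offset and axial hexagonal coordinates. This is textbook material, not code from the cited works, and the "odd-q" layout choice is an assumption for illustration.

```python
# Hedged illustration of a hexagonal addressing scheme: converting "odd-q"
# offset coordinates (odd columns shifted) to axial coordinates and back.
def offset_to_axial(col: int, row: int) -> tuple[int, int]:
    """Map odd-q offset coordinates to axial (q, r) coordinates."""
    q = col
    r = row - (col - (col & 1)) // 2
    return q, r


def axial_to_offset(q: int, r: int) -> tuple[int, int]:
    """Inverse mapping: axial (q, r) back to odd-q offset coordinates."""
    col = q
    row = r + (q - (q & 1)) // 2
    return col, row


# Round-trip check on a small neighbourhood
for col in range(3):
    for row in range(3):
        assert axial_to_offset(*offset_to_axial(col, row)) == (col, row)
```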
“…1) Convolutional layer: Current research by Steppa and Holch [26] suggests a hexagonal convolutional layer implementation where each hexagonal block of pixels is separated by d/2 vertical slices, with d denoting the horizontal diameter of the hexagonal pixel block. These hexagonal sub-blocks are then convolved using d convolutions with their respective kernels.…”
Section: A. Hexagonal Layers
confidence: 99%
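The column-wise decomposition described in this statement can be made concrete with a hedged sketch (not the HexagDLy implementation itself): a one-ring hexagonal neighbourhood sum on data stored in odd-q offset addressing, where odd columns are shifted down by half a pixel, so the diagonal neighbours differ between even and odd columns. A learnable layer would replace the constant weights with per-neighbour parameters.

```python
# Sketch: one-ring hexagonal neighbourhood sum via column-parity slicing.
import torch
import torch.nn.functional as F


def hex_neighbour_sum(x: torch.Tensor) -> torch.Tensor:
    """Sum each pixel with its six hexagonal neighbours.

    x: (N, C, H, W) tensor in odd-q offset addressing (odd columns
    shifted down by half a pixel). Constant weights for clarity only.
    """
    p = F.pad(x, (1, 1, 1, 1))          # pad width and height by one pixel
    centre = p[:, :, 1:-1, 1:-1]
    up     = p[:, :, :-2, 1:-1]         # row r-1, same column
    down   = p[:, :, 2:, 1:-1]          # row r+1, same column
    left   = p[:, :, 1:-1, :-2]         # column c-1, same row
    right  = p[:, :, 1:-1, 2:]          # column c+1, same row
    up_l   = p[:, :, :-2, :-2]          # (r-1, c-1)
    up_r   = p[:, :, :-2, 2:]           # (r-1, c+1)
    down_l = p[:, :, 2:, :-2]           # (r+1, c-1)
    down_r = p[:, :, 2:, 2:]            # (r+1, c+1)

    # Vertical neighbours are shared by all columns.
    out = centre + up + down
    even = torch.arange(x.shape[-1]) % 2 == 0
    odd = ~even
    # Even columns: side neighbours sit at rows r and r-1.
    out[..., even] += (left + right + up_l + up_r)[..., even]
    # Odd (shifted) columns: side neighbours sit at rows r and r+1.
    out[..., odd] += (left + right + down_l + down_r)[..., odd]
    return out


print(hex_neighbour_sum(torch.ones(1, 1, 5, 5)))
```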
“…Turbo coding is further depicted as taking place between all four folded autoencoders (via an alpha complex, in blue), so instantiating further (hierarchical) turbo coding and thereby a larger shared latent space, so enabling predictive modeling of causes that achieve coherence via larger (and more slowly forming) modes of informational integration. This shared latent space is illustrated as containing an embedded graph neural network (GNN) (Liu et al., 2019; Steppa and Holch, 2019), depicted as a hexagonal grid, as a means of integrating information via structured representations, where resulting predictions can then be propagated downward to individual folded autoencoders. Variable shading within the hexagonal grid-space of the GNN is meant to indicate degrees of recurrent activity, potentially implementing further turbo coding, and red arrows over this grid are meant to indicate sequences of activation, and potentially representations of trajectories through feature spaces.…”
Section: Repeat Steps 3 and 4 Until Loopy Belief Propagation Converges
confidence: 99%