I dropped a neural net
Oh no! I dropped an extremely valuable trading model and it fell apart into linear layers! I need to rebuild it before anyone notices, but I can't remember how these pieces go together, or how it was trained.
All I have left are the pieces of the model and some historical data. Can you help me figure out how to put it back together?
Luckily I still have the source code of the layers that the neural network is made of. They look like this:
import torch
import torch.nn as nn

class Block(nn.Module):
    """A residual block: x -> x + Linear(ReLU(Linear(x)))."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.inp = nn.Linear(in_dim, hidden_dim)
        self.activation = nn.ReLU()
        self.out = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        residual = x
        x = self.inp(x)
        x = self.activation(x)
        x = self.out(x)
        return residual + x

class LastLayer(nn.Module):
    """A plain linear output layer."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.layer = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.layer(x)
The solution to this puzzle is a permutation of the 97 pieces. For each position from 0 to 96 (inclusive), give the index of the piece that is applied at that position.
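As a minimal sketch of that answer format (the helper names here are hypothetical, not part of the puzzle): a candidate answer is a list of length 97 whose i-th entry is the index of the piece applied at position i, so the list must contain each of 0..96 exactly once.

```python
def is_valid_answer(perm):
    """Check that perm is a permutation of the indices 0..96."""
    return sorted(perm) == list(range(97))

def reorder(pieces, perm):
    """Rebuild the stack by placing piece perm[i] at position i."""
    return [pieces[j] for j in perm]

# The identity permutation is a valid (though almost certainly wrong) answer;
# a list with repeated indices is not a permutation at all.
assert is_valid_answer(list(range(97)))
assert not is_valid_answer([0] * 97)

# reorder on a small example: piece 2 goes first, then piece 0, then piece 1.
assert reorder(["a", "b", "c"], [2, 0, 1]) == ["c", "a", "b"]
```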
If you figure it out, please let us know at archaeology@janestreet.com.
Solved by
- At least 44 people
Learn more at janestreet.com.