Malay Haldar
Dec 17, 2023


The network that scores the booked listing (producing logit_x) and the network that scores the non-booked listing (producing logit_y) are *both parameterized by theta*, i.e., they *share the same weights*. So at the end of training, you have just a single set of learned weights.

During inference, you use those same weights to compute a logit for each listing, and then rank the listings by their logits.
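A minimal sketch of this weight-sharing setup, using a plain linear scorer in place of the actual network (the function names and the pairwise logistic loss here are illustrative assumptions, not the original implementation): the same theta scores both the booked and the non-booked listing during training, and that single theta is reused at inference to rank.

```python
import numpy as np

def logit(theta, features):
    # One shared scorer parameterized by theta; applied to every listing.
    return float(np.dot(theta, features))

def pairwise_loss(theta, booked, non_booked):
    # Logistic loss on (logit_x - logit_y): pushes the booked listing's
    # logit above the non-booked listing's. Both logits come from the
    # SAME theta -- there is only one set of weights to learn.
    diff = logit(theta, booked) - logit(theta, non_booked)
    return float(np.log1p(np.exp(-diff)))

def rank(theta, listings):
    # Inference: score each listing with the shared theta, sort descending.
    return sorted(listings, key=lambda f: logit(theta, f), reverse=True)
```

For example, with theta = [1.0, 0.0], a booked listing with features [2.0, 0.0] gets a higher logit than a non-booked listing with features [1.0, 0.0], so it ranks first.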
