PPLM builds on top of large transformer-based generative models such as GPT-2, enabling finer-grained control over attributes of the generated text (e.g. gradually switching topic 🐱 or sentiment 😃).
⚠️ We had to turn off the PPLM machine as it was costly to host. Try it locally using transformers, or contact us if you really need it as a hosted service. ⚠️
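
To give a sense of what "try it locally" involves, below is a rough, self-contained sketch of the PPLM idea on top of GPT-2: at each decoding step, the cached key/value activations are nudged by a few gradient steps that raise the probability of a small hand-picked bag of topic words, and the next token is then sampled from the perturbed model. The topic words, hyperparameters, and greedy decoding are illustrative assumptions, not the official implementation (the full example script lives in the transformers repository under examples/research_projects/pplm); it also assumes a transformers version that still accepts the legacy tuple format for past_key_values.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():            # gradients are only needed w.r.t. the perturbations
    p.requires_grad_(False)

# Hand-picked "bag of words" standing in for a topic attribute model (an assumption).
topic_words = ["science", "physics", "experiment", "laboratory"]
bow_ids = [tokenizer.encode(" " + w)[0] for w in topic_words]  # first BPE piece of each word

def to_tuples(past):
    # Newer transformers return a Cache object; fall back to the legacy tuple format.
    return past.to_legacy_cache() if hasattr(past, "to_legacy_cache") else past

def perturb_past(past, last_token, step_size=0.02, num_steps=3):
    """Nudge the cached key/value activations so the bag-of-words tokens become more likely."""
    deltas = [tuple(torch.zeros_like(t, requires_grad=True) for t in layer) for layer in past]
    for _ in range(num_steps):
        perturbed = tuple(tuple(t + d for t, d in zip(layer, dl))
                          for layer, dl in zip(past, deltas))
        logits = model(last_token, past_key_values=perturbed, use_cache=True).logits[:, -1, :]
        probs = torch.softmax(logits, dim=-1)
        loss = -torch.log(probs[:, bow_ids].sum())             # attribute (topic) loss
        loss.backward()
        with torch.no_grad():                                  # manual gradient step on the deltas
            for dl in deltas:
                for d in dl:
                    d -= step_size * d.grad / (d.grad.norm() + 1e-10)
                    d.grad = None
    return tuple(tuple((t + d).detach() for t, d in zip(layer, dl))
                 for layer, dl in zip(past, deltas))

prompt = tokenizer("The issue focused on", return_tensors="pt").input_ids
with torch.no_grad():                                          # cache the prompt, keep the last token out
    out = model(prompt[:, :-1], use_cache=True)
past, last = to_tuples(out.past_key_values), prompt[:, -1:]

generated = prompt
for _ in range(30):
    past = perturb_past(past, last)                            # steer the history toward the topic
    with torch.no_grad():
        out = model(last, past_key_values=past, use_cache=True)
    last = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)   # greedy decoding, for brevity
    past = to_tuples(out.past_key_values)
    generated = torch.cat([generated, last], dim=-1)

print(tokenizer.decode(generated[0]))
```

The actual PPLM method additionally uses a KL term toward the unperturbed distribution and fuses the perturbed and unperturbed token distributions to keep the text fluent; this sketch omits both for brevity.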