Unsafe use of eval
In modeling_transnormer.py and utils.py, eval
is used to parse environment variables.
Instead of:

```python
some_option = eval(os.environ.get("some_option", default="False"))
```

I would recommend using something like:

```python
some_option = os.environ.get("some_option", default="False").lower() in ["true", "yes", "y", "1"]
```
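Wrapped in a small helper, that could look like the sketch below (the name `env_flag` is my own, not something in the codebase):

```python
import os

def env_flag(name: str, default: str = "False") -> bool:
    """Parse a boolean flag from an environment variable without eval()."""
    return os.environ.get(name, default).lower() in ["true", "yes", "y", "1"]

os.environ["some_option"] = "True"
some_option = env_flag("some_option")
```

This avoids executing arbitrary code from the environment and treats any unrecognized value as `False`.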
Also, do_eval
in particular is evaluated on every forward
call for each NormLinearAttention
layer. Is there a particular reason for this, or should it instead be a global?
That's just poor code; there should be a PR to improve it.
The `eval` in every forward is probably because the authors had some testing code and wanted an easy way to switch between evaluation and training for their own convenience.
It shouldn't be this way, though. In practice, we should replace the `do_eval` assignment with a check derived from the user's earlier call to PyTorch's `model.eval()`, i.e. the `model.training` bool: https://discuss.pytorch.org/t/check-if-model-is-eval-or-train/9395
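A rough sketch of what that could look like; the class is a hypothetical stand-in for the real layer in modeling_transnormer.py, and the two branch bodies are placeholders — only the `self.training` check is the point:

```python
import torch
from torch import nn

class NormLinearAttentionSketch(nn.Module):
    # hypothetical stand-in for the real NormLinearAttention layer
    def forward(self, x):
        # instead of: do_eval = eval(os.environ.get("do_eval", default="False"))
        if self.training:
            return x * 2  # placeholder training-mode path
        return x          # placeholder eval-mode path

layer = NormLinearAttentionSketch()
layer.eval()  # flips layer.training to False; no env vars involved
```

`nn.Module.train()` / `.eval()` set the `training` flag recursively on all submodules, so every attention layer sees the same mode without re-parsing anything per forward call.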
Hello, thank you for your suggestion. We will optimize the code in the future. The "do_eval" is related to attention calculation, and we will also update this in the future.
Could we not just do is_cuda_available
and possibly check torch.version.hip
(for new MLIR Triton with ROCM support) in a general transformers.utils.import_utils.is_triton_available
that would benefit every model author relying on Triton?
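A hedged sketch of what such a helper might look like — this exact function does not exist in `transformers.utils.import_utils` today, so the name and logic here are assumptions:

```python
import importlib.util

def is_triton_available() -> bool:
    # hypothetical helper, loosely modeled on transformers' import_utils style
    if importlib.util.find_spec("triton") is None:
        return False
    try:
        import torch
    except ImportError:
        return False
    # Triton kernels need a CUDA GPU, or ROCm via the MLIR Triton backend
    return torch.cuda.is_available() or torch.version.hip is not None
```

Checking `torch.version.hip` covers ROCm builds, where `torch.version.cuda` is `None` even though Triton kernels can run.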
As for the other evals, wouldn't it make more sense just to set them as hyperparameters and/or args/kwargs?
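For example, a minimal sketch where the flag is read once at construction time from a kwarg instead of being re-parsed from the environment (class and parameter names are hypothetical):

```python
class AttentionLayerSketch:
    # hypothetical: the flag is set once in __init__, not evaluated per forward call
    def __init__(self, some_option: bool = False, **kwargs):
        self.some_option = some_option

layer = AttentionLayerSketch(some_option=True)
```

In a real model this would typically live on the model's config object, so it is serialized with the checkpoint rather than depending on the runtime environment.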