Trying to train with `AutoFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")` as the feature extractor (see Reproduction below), I keep getting `AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'`, and there is still no clear guide around this.
I found this, which said the docs should be fixed, but I still haven't found a working solution by reading the linked docs and the pages around them. Is `reduce_labels` still a feature, or should I move to another feature extractor?
Expected behavior
The solution for `AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'` should be
feature_extractor = AutoFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512", do_reduce_labels=True)
according to the link, but the problem persists.
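As a sanity check, here is a minimal sketch of what I would expect to work, with a fallback in case the attribute name differs between versions (the `getattr` fallback is my own workaround, not something from the docs):

```python
from transformers import AutoFeatureExtractor

# Load the processor with label reduction enabled (newer kwarg name).
feature_extractor = AutoFeatureExtractor.from_pretrained(
    "nvidia/segformer-b0-finetuned-ade-512-512", do_reduce_labels=True
)

# Read the flag via the new attribute name, falling back to the old one if it exists.
reduce_labels = getattr(
    feature_extractor,
    "do_reduce_labels",
    getattr(feature_extractor, "reduce_labels", False),
)
print(type(feature_extractor).__name__, reduce_labels)  # expected: SegformerFeatureExtractor True (names may vary by version)
```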
Edit 2:
Here is the complete error message; by the time I wrote this I had already rerun the training to reproduce it.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[158], line 1
----> 1 trainer.train()
2 trainer.push_to_hub()
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:2155, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
2152 try:
2153 # Disable progress bars when uploading models during checkpoints to avoid polluting stdout
2154 hf_hub_utils.disable_progress_bars()
-> 2155 return inner_training_loop(
2156 args=args,
2157 resume_from_checkpoint=resume_from_checkpoint,
2158 trial=trial,
2159 ignore_keys_for_eval=ignore_keys_for_eval,
2160 )
2161 finally:
2162 hf_hub_utils.enable_progress_bars()
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:2589, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2587 self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
2588 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
-> 2589 self._maybe_log_save_evaluate(
2590 tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time
2591 )
2592 else:
2593 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:3047, in Trainer._maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time)
3045 metrics = None
3046 if self.control.should_evaluate:
-> 3047 metrics = self._evaluate(trial, ignore_keys_for_eval)
3048 is_new_best_metric = self._determine_best_metric(metrics=metrics, trial=trial)
3050 if self.args.save_strategy == SaveStrategy.BEST:
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:3001, in Trainer._evaluate(self, trial, ignore_keys_for_eval, skip_scheduler)
3000 def _evaluate(self, trial, ignore_keys_for_eval, skip_scheduler=False):
-> 3001 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
3002 self._report_to_hp_search(trial, self.state.global_step, metrics)
3004 # Run delayed LR scheduler now that metrics are populated
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:4051, in Trainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
4048 start_time = time.time()
4050 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
-> 4051 output = eval_loop(
4052 eval_dataloader,
4053 description="Evaluation",
4054 # No point gathering the predictions if there are no metrics, otherwise we defer to
4055 # self.args.prediction_loss_only
4056 prediction_loss_only=True if self.compute_metrics is None else None,
4057 ignore_keys=ignore_keys,
4058 metric_key_prefix=metric_key_prefix,
4059 )
4061 total_batch_size = self.args.eval_batch_size * self.args.world_size
4062 if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:4340, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
4338 eval_set_kwargs["losses"] = all_losses if "loss" in args.include_for_metrics else None
4339 eval_set_kwargs["inputs"] = all_inputs if "inputs" in args.include_for_metrics else None
-> 4340 metrics = self.compute_metrics(
4341 EvalPrediction(predictions=all_preds, label_ids=all_labels, **eval_set_kwargs)
4342 )
4343 elif metrics is None:
4344 metrics = {}
Cell In[156], line 27, in compute_metrics(eval_pred)
19 pred_labels = logits_tensor.detach().cpu().numpy()
20 # currently using _compute instead of compute
21 # see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
22 metrics = metric._compute(
23 predictions=pred_labels,
24 references=labels,
25 num_labels=num_labels,
26 ignore_index=0,
---> 27 reduce_labels=feature_extractor.reduce_labels,
28 )
30 # add per category metrics as individual key-value pairs
31 per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'
It seems that `reduce_labels=image_processor.reduce_labels` is passed to the `metric._compute` method, whereas you should be passing `reduce_labels=image_processor.do_reduce_labels`.
Where did you get this code from? I'll make sure it gets updated.
from transformers import SegformerImageProcessor
feature_extractor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512", reduce_labels=True)
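For completeness, here is a hedged sketch of how the `compute_metrics` function from the traceback could be updated with that change. `metric`, `num_labels` and `feature_extractor` are assumed to be defined earlier in the notebook, and everything except the `reduce_labels` line is reconstructed from the visible traceback, so it may not match the original cell exactly:

```python
import torch


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    logits_tensor = torch.from_numpy(logits)
    # Upsample logits to the label resolution, then take the per-pixel argmax.
    logits_tensor = torch.nn.functional.interpolate(
        logits_tensor,
        size=labels.shape[-2:],
        mode="bilinear",
        align_corners=False,
    ).argmax(dim=1)
    pred_labels = logits_tensor.detach().cpu().numpy()

    # Using _compute instead of compute, see
    # https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
    metrics = metric._compute(
        predictions=pred_labels,
        references=labels,
        num_labels=num_labels,
        ignore_index=0,
        reduce_labels=feature_extractor.do_reduce_labels,  # was: feature_extractor.reduce_labels
    )

    # Add per-category metrics as individual key-value pairs.
    per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
    per_category_iou = metrics.pop("per_category_iou").tolist()
    metrics.update({f"accuracy_{i}": v for i, v in enumerate(per_category_accuracy)})
    metrics.update({f"iou_{i}": v for i, v in enumerate(per_category_iou)})
    return metrics
```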
I got the code with do_reduce_labels from this link, quote:
One additional thing to keep in mind is that one can initialize SegformerImageProcessor with do_reduce_labels set to True or False. In some datasets (like ADE20k), the 0 index is used in the annotated segmentation maps for background. However, ADE20k doesn’t include the “background” class in its 150 labels. Therefore, do_reduce_labels is used to reduce all labels by 1, and to make sure no loss is computed for the background class (i.e. it replaces 0 in the annotated maps by 255, which is the ignore_index of the loss function used by
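To make the quoted behavior concrete, here is a tiny toy example of what the label reduction effectively does to an annotated map (the array values are made up; 255 is the ignore_index mentioned above):

```python
import numpy as np

# Toy annotated segmentation map where 0 marks "background".
seg_map = np.array([[0, 1, 2],
                    [3, 0, 150]], dtype=np.int64)

# do_reduce_labels=True effectively shifts every label down by one
# and maps the former background class (0) to 255, the loss's ignore_index.
reduced = seg_map - 1
reduced[seg_map == 0] = 255

print(reduced)
# [[255   0   1]
#  [  2 255 149]]
```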
System Info
Python 3.11.10, transformers 4.47.0
Who can help?
@stevhliu
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
Trying to train by using

from transformers import AutoFeatureExtractor
feature_extractor = AutoFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

as the feature extractor and keep getting

AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'

(full traceback above).