I suppose we could ask Sigmetnow to stop using ChatGPT. Or make posting AI-generated content a bannable offense. Of course that would be silly!
We may not like it, but using ChatGPT is convenient. I don't personally see the appeal of using it for article summaries specifically, but I trust that those who use it for that purpose, who even defend its use against all opposition, have their reasons. I don't understand those reasons yet, but if there were no practical reason to use it, all that would remain is blindly following a trend...
There are some things that I can do with ChatGPT that I couldn't do without it. For example, ChatGPT has all but replaced Google Translate for me, because it's just much better. Do I trust its translations? Well, yeah, a bit more than I trust Google... It's still shit sometimes, but that's what I expect from machine translation. If I rely on it and it gets something wrong, I'm not surprised, and that limits the use cases. I definitely don't want to become the journalist who makes false accusations based on some ChatGPT output.
I think the arguments against it based on its allegedly high energy consumption don't hold up; they're straw men. There are technologies for which the energy argument is true, some quite despicable (see Bitcoin), but not LLMs. At least not yet.
For the moment, we are dealing with two issues that are hard to separate.
There's the classic one. ChatGPT is a new technology, and like all new technologies, people initially don't trust it. I suspect some of the opposition to it on this forum stems from that general fear, which may sound irrational, but shouldn't be laughed off.
But there's a deeper problem, and it's new: people's words can no longer be trusted to be their own.
When we discuss whether users should clearly mark AI-generated content, one issue is that there are strong incentives against complying. Some people will see your post, read "ChatGPT", and stop reading. Or worse, get angry and tell you about it. I guess that's generally not the desired outcome, and simply not telling anyone that your post wasn't entirely written by you solves that problem.
I'd still argue that we should try to be open about things that we know matter to other people. If we want to collaborate on this forum, we need to be able to trust others to play by the rules. Not deliberately deceiving others is part of that.
So. Please think about why using ChatGPT makes sense when you do it; if some of the content you post is not yours, mark it as such (that's obvious for quotes, maybe less obvious for AI-generated content); and if you find that people don't appreciate it, consider not using those tools anymore, rather than getting angry at other forum members.
Just my opinion. Yours may differ.