This will definitely change though. As LLMs improve and develop new emergent properties, the gap between a human-written story and an AI-generated one will inevitably diminish.
Of course, you will still need to provide detailed, concrete input so that the model can give you the most accurate result.
I feel like many people subscribe to a sort of human superiority complex that is unjustified and that will quite certainly get stomped in the coming decades.
That is definitely not inevitable. It could very well be that we reach a point of diminishing returns soon.
I’m not convinced that the simplistic construction of current-generation machine learning can go much further than it already has without significant changes in strategy.
Could be, but the chances of that happening are next to zero, and it’d be foolish to assume this is the peak.