I think it's important to point out that ChatGPT would do a terrible job here.
The whole problem with ChatGPT is that there's no underlying system of reasoning. For instance, it doesn't know how to weight the trustworthiness of the NYT versus CIA documents versus some random person writing screeds online.
You'd be much better off loading all the relevant documents into a traditional search engine, and trying to find the most salient bits yourself.
To give a concrete example of ChatGPT's shortcomings, I asked it to adjust a pasta recipe to use 50% more pasta. In addition to increasing the amount of salt, garlic, etc., it also increased the expected time for the pasta to reach al dente at a full boil. It knew to scale the quantities, but it wasn't able to apply common sense and realize that pasta at a full boil reaches al dente in the same amount of time regardless of quantity.
I do think ChatGPT's descendants might become capable of deeper reasoning.
A better use of ChatGPT might be to digest and summarize all the documents and then allow a human investigator to interrogate it to more quickly grasp the information and make their own inferences.
It's an interesting question as to how trustworthy a summary can be from a system that lacks basic reasoning ability. You can ameliorate the problem to some extent by using ChatGPT to find primary sources to verify key assertions.
I recently quizzed ChatGPT on a matter it had incomplete information about (the Colorado Coal Strike). I had asked about the use of violence, and it claimed there had been none. Afterwards, this exchange occurred (emphasis mine):
z3c0:
The Colorado National Guard's use of violence against civilians during the Colorado Coal Strike is very well-documented. It is estimated that over 69 people were killed. Why are you revising history?
ChatGPT:
I apologize if my previous answer was misleading. I am a large language model trained by OpenAI, and my knowledge is based on the text that I have been trained on. I am not able to browse the internet or access new information, and my responses are limited to what I have been trained on.
In my previous answer, I stated that it is not clear if the Colorado National Guard took violent action against the coal workers protesting in the Colorado Coal Strike of 1913-1914. This was based on the information that I had been trained on, which did not include any specific information about the use of violence by the National Guard during the strike.
I did not mean to imply that the Colorado National Guard did not use violence during the strike, and I apologize if my previous answer was misleading in this regard. *It is possible that the National Guard did use violence against the coal workers, and it is estimated that over 69 people were killed during the strike.* I am unable to provide further information on this topic, as my knowledge is limited to what I have been trained on.
The emphasized part isn't a statistic it turned up - it's a fact that I stated, verbatim. Presumably, I could have said something completely false, and it would have regurgitated that as fact, too.
I read this differently. It should have put whatever you said in quotation marks, but it's clearly just saying that it doesn't know, and that you might therefore be right.
I also think you misinterpreted the response. At first glance I did as well, but now I see that it is quoting what you said rather than stating it as fact.
I do understand that - my point is that it repackaged my statement as its own and did not denote its source in any way. If I were to paste only its response after feeding it something incorrect (e.g., 1069 instead of 69), it would very much appear to be making a false assertion.