Depends on your interpretation of "AI". The current interpretation -- at least as applied by the popular press and laymen -- is really just deep learning with a bit of inferencing.
That is, you train the algorithms by giving them a crap load of examples with appropriate interpretations -- e.g. pictures of things and their names -- and the algorithms compare and contrast recognisable features, extracting the subsets of features that uniquely identify each of the trained subjects. Thereafter, give them a different picture of one of the training subjects and they will identify it as whichever training subject it most resembles, in terms of the number and similarity of the features they manage to extract.
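To make that concrete, here's a minimal sketch of the matching step, in Perl for the local audience. It assumes the hard part -- reducing pictures to numeric feature vectors -- has already happened, and every subject, feature and number is invented:

    use strict;
    use warnings;

    # Hypothetical training data: subject => feature vector
    # (eye distance, nose length, jaw width; all numbers invented).
    my %trained = (
        alice => [ 6.2, 4.10, 11.0 ],
        bob   => [ 5.8, 5.00, 12.3 ],
        carol => [ 6.5, 3.90, 10.4 ],
    );

    # Euclidean distance between two feature vectors.
    sub distance {
        my ( $u, $v ) = @_;
        my $sum = 0;
        $sum += ( $u->[$_] - $v->[$_] )**2 for 0 .. $#$u;
        return sqrt $sum;
    }

    # A new sample is identified as whichever trained subject it
    # most resembles, i.e. whichever is nearest in feature space.
    my $sample = [ 6.4, 3.95, 10.5 ];
    my ($match) = sort {
        distance( $trained{$a}, $sample ) <=> distance( $trained{$b}, $sample )
    } keys %trained;

    print "best match: $match\n";    # prints "best match: carol"

Nothing in there understands faces; it just ranks distances.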
The inferencing is essentially limited to making assumptions. For example, if the training pictures included both eyes (full face) and the new picture does not (profile), then the subset of features extracted from the new sample will not include the between-eyes distance that is present in the training subsets. However, the length of the nose will be present, and it is possible to draw a reasonably accurate correlation between nose length and between-eyes distance, so the algorithm can infer some of the missing features and then assign identity on the probability of a match, judged by the correspondence of the subsets, inferred unknowns included.
Thus the algorithm may match an unknown subject to a sibling, mother/daughter, father/son or similar relation. Equally, though, it may match to a completely unrelated doppelganger.
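A sketch of that inference step, assuming the correlation is simple enough to capture with an ordinary least-squares line (the measurements are invented):

    use strict;
    use warnings;

    # Invented training measurements: nose length vs. between-eyes distance.
    my @nose = ( 4.0, 4.5, 5.0, 5.5 );
    my @eyes = ( 6.0, 6.4, 6.9, 7.3 );

    # Ordinary least-squares fit: eyes ~ slope * nose + intercept.
    my $n = @nose;
    my ( $sx, $sy, $sxx, $sxy ) = ( 0, 0, 0, 0 );
    for my $i ( 0 .. $n - 1 ) {
        $sx  += $nose[$i];
        $sy  += $eyes[$i];
        $sxx += $nose[$i] ** 2;
        $sxy += $nose[$i] * $eyes[$i];
    }
    my $slope     = ( $n * $sxy - $sx * $sy ) / ( $n * $sxx - $sx ** 2 );
    my $intercept = ( $sy - $slope * $sx ) / $n;

    # The profile shot yields a nose length but no eyes;
    # infer the missing feature from the correlation.
    my $nose_length = 4.8;
    printf "inferred between-eyes distance: %.2f\n",
           $slope * $nose_length + $intercept;    # ~6.69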
My point here is that most of the algorithms currently being labeled as AI do not have any real intelligence; just big memories and the ability to extrapolate from knowns to unknowns. But that extrapolation is purely statistical, without any intelligence behind it. Train exclusively with ICn subjects (to borrow the UK police identity codes) and then query with IC-not-n subjects, and the algorithms will still offer matches that the human eye/brain/logic would immediately preclude.
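As a toy illustration of why: a nearest-match classifier always has a nearest match, however dreadful. "None of the above" only exists if you bolt on a rejection threshold yourself (all distances invented):

    use strict;
    use warnings;

    # Made-up distances from an out-of-range query to every trained subject.
    # All of them are terrible matches -- but one is still "nearest".
    my %dist = ( alice => 97.2, bob => 103.5, carol => 95.8 );

    my ($nearest) = sort { $dist{$a} <=> $dist{$b} } keys %dist;

    # Without this rejection step, the algorithm answers "carol" regardless;
    # a human would have precluded the match on sight.
    my $threshold = 10;
    print $dist{$nearest} > $threshold
        ? "no credible match\n"
        : "match: $nearest\n";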
Apply this kind of algorithm to code generation, and it's not hard to envisage training it on a bunch of data entry screens or shopping cart web pages -- each described in a very carefully worded (human chosen and written) vocabulary -- and then having it accept new descriptions, written by humans in that same vocabulary, and generate code that approximates a solution.
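One crude way such a scheme might be sketched: canned templates keyed by controlled-vocabulary descriptions, with new requests routed by word overlap. All the names, descriptions and generators here are invented:

    use strict;
    use warnings;

    # Hypothetical training pairs: controlled-vocabulary description
    # => the canned generator it was seen with.
    my %template_for = (
        'login form with username password submit'    => 'gen_login_form',
        'shopping cart with item list total checkout' => 'gen_cart_page',
        'data entry screen with name address phone'   => 'gen_entry_screen',
    );

    # Crude similarity: count of words two descriptions share.
    sub overlap {
        my ( $x, $y ) = @_;
        my %seen = map { $_ => 1 } split ' ', lc $x;
        return scalar grep { $seen{$_} } split ' ', lc $y;
    }

    # A new human-written request, in the same vocabulary, is routed
    # to whichever known description it most resembles.
    my $request = 'data entry screen with name phone';
    my ($best) = sort { overlap( $b, $request ) <=> overlap( $a, $request ) }
                 keys %template_for;

    print "would call: $template_for{$best}()\n";    # gen_entry_screen()

There is no comprehension of what a shopping cart is in there; the "generation" is really retrieval.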
Where (most if not all) AI still comes up short is in the ability to innovate: to intuit a new solution to a problem.
It is easy to program a computer to take a brute force approach to testing possible solutions, and that can be greatly sped up by randomising parameters and using statistical instruments to move towards a known goal; but there is no mechanism yet known that will find a solution to a problem that a human being hasn't solved first. And many, if not most, programs consist not of one problem, but of a collection of problems, each of which must be solved in order to produce the final result.
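For instance, a few lines of stochastic hill climbing will home in on a goal very quickly -- but only because the goal, and the fitness function measuring distance from it, were supplied by a human. A toy sketch:

    use strict;
    use warnings;

    # Toy fitness function: distance from a *known* goal. The search only
    # works because a human already defined what "solved" looks like.
    my $goal = 42;
    sub fitness { my ($x) = @_; return abs( $goal - $x ) }

    # Stochastic hill climbing: random perturbations, keep any improvement.
    my $best = rand 100;
    for ( 1 .. 10_000 ) {
        my $candidate = $best + ( rand(2) - 1 );    # small random step
        $best = $candidate if fitness($candidate) < fitness($best);
    }
    printf "best found: %.3f (goal: %d)\n", $best, $goal;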
In short, I foresee it being a long time before software starts writing original software.
That said, there are many, many ways in which software automation could be used to speed up the development and testing of software, if only we weren't so wedded to tradition: we are still like 18th century watchmakers and cabinetmakers in the way we eulogise and covet our tools, working environments and methods.
Quite why so many of us think that a different syntax, a manifesto, or stand-up meetings will be the magic bullet that transforms the development process is beyond me.