Programming Software IT Technology

Using AI With GCC to Speed Up Mobile Design 173

Atlasite writes "The WSJ is reporting on a EU project called Milepost aimed at integrating AI inside GCC. The team partners, which include include IBM, the University of Edinburgh and the French research institute, INRIA, announced their preliminary results at the recent GCC Summit, being able to increase the performance of GCC by 10% in just one month's work. GCC Summit paper is provided [PDF]."
This discussion has been archived. No new comments can be posted.

  • by lastchance_000 ( 847415 ) on Wednesday July 02, 2008 @01:49PM (#24033807)

    Can we please stop using pointless backronyms? What purpose do they serve?

    • Re: (Score:3, Informative)

      Mnemonics. It's easier to remember. That is a particularly bizarre construction they've come up with, though.
      • Sure, I understand the use of acronyms (I was in the military for over 10 years), but in this case, I don't see either form making the other easier to recall.

      • by sm62704 ( 957197 ) on Wednesday July 02, 2008 @02:54PM (#24034743) Journal

        GCC is easier to remember? Ok, that really isn't an acronym (or bacronym I guess... is it?)

        Actually, both acronyms and bacronyms [wikipedia.org] (a word I had to look up, having never seen it before; damn, I was 30 when the word was coined and forty before it was ever documented) are ok by me.

        What's not ok is the devolution of literacy. "Back in the day" the rule was, and still should be, that the first time any acronym (and now bacronym) is used in any document, it should be spelled out:

        "The WSJ (Wall Street Journal) is reporting on a EU project called Milepost aimed at integrating AI (Articiaial Intelligence) inside GCC (Gnu Compiler). The team partners, which include include IBM, the University of Edinburgh and the French research institute, INRIA, announced their preliminary results at the recent GCC Summit, being able to increase the performance of GCC by 10% in just one month's work. GCC Summit paper is provided [PDF]."

        "Wall Street Journal" should be spelled out because dammit, Jim, I'm a nerd, not a greedhead. EU should need no more explanation than US. AI shouldn't need explanation; this is, after all, a nerd site and the term has been around almost as long as I have. IBM has been around a lot longer and is usually how the company is referred to; that's its name. Its commercials and ads don't even say "International Business Machines".

        GCC would be unknown to non-Linux users and non-programmers, so it should have been spelled out as well. PDF doesn't need to be expanded because, geez, everybody knows what a PDF is, but who knows what a portable document format is?

        • Almost...but I think you got it backwards (and skipped EU).

          "The Wall Street Journal (WSJ)..."

          It is awkward and incorrect to use the abbreviation first, followed by an explanation. You should instead write out the first instance and then provide the reader with a note to the effect of "hereafter to be referred to as XYZ".

        • by Cairnarvon ( 901868 ) on Wednesday July 02, 2008 @08:46PM (#24038543) Homepage

          GCC (Gnu Compiler)

          I think you mean:

          GCC (GNU (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's (GNU's ...

          Yeah, not always a good idea.

    • Yeah, I think MachIne Learning for Quick Target Optimization And Speed Technology would have been a much better forward acronym.

    • At integrating AI into GCC...

      GACIC

  • by jellomizer ( 103300 ) on Wednesday July 02, 2008 @01:59PM (#24033963)

    This Al guy seems to be a really good developer. We should have noticed his skills and got him optimizing GCC a long time ago. ... I like the Arial font.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      This Al guy

      Your post is also confusing. Why abbreviate? Why not say "This A or guy seem"? Or were you just trying to pipe two sentence fragments?

    • by Dwedit ( 232252 )

      I looked at the PDF. Significant parts of it were unreadable on Foxit Reader! Are any other .pdf readers having trouble?

  • by AnalogyShark ( 1317197 ) on Wednesday July 02, 2008 @02:02PM (#24034013)
    "Milepost is realizing the vision of customized hardware with tailor fit software" This particular part made me think of a day when every program comes with a redesign.exe. Simply click the button, and it scans every piece of hardware on your computer, and then rewrites every optimization in it to perfectly fit your computer. Programs that streamline to your hardware, maybe even change the OS's they work under. It's written for Windows, you're running OSX? No problem, it'll rewrite itself as an OSX program. Though, that's probably still decades off. But AI seems to me to be the way to ultimate compatibility.
    • Re: (Score:3, Insightful)

      by LiENUS ( 207736 )

      This particular part made me think of a day when every program comes with a redesign.exe. Simply click the button, and it scans every piece of hardware on your computer, and then rewrites every optimization in it to perfectly fit your computer. Programs that streamline to your hardware, maybe even change the OS's they work under. It's written for Windows, you're running OSX? No problem, it'll rewrite itself as an OSX program. Though, that's probably still decades off. But AI seems to me to be the way to ultimate compatibility.

      This exists today without AI. See Java with JIT, or even AOT (ahead-of-time) compilation. There are of course some issues with it, but the technology is there.

    • Sounds like a Gentoo user's wet dream.

    • by LWATCDR ( 28044 ) on Wednesday July 02, 2008 @02:44PM (#24034613) Homepage Journal

      Actually IBM did this a few decades ago.
      The System/38, AS/400 and iSeries are all compatible but very different machines internally.
      IBM came up with an "ideal" instruction set that no CPU used. When you do the initial program load ("install") on one of those machines, it compiles the ideal instruction set into the actual instruction set for that machine.
      That allowed IBM to move from old bipolar CPUs to the Power RISC CPUs with 100% compatibility.
      There isn't any reason why you couldn't do the same with Linux or Windows today.

      • by headkase ( 533448 ) on Wednesday July 02, 2008 @03:08PM (#24034929)
        It is done today; it's called byte-code (or a virtual instruction set) and it's in Java, Python, and C#, to name a few. Back in the old 8-bit days it was also called tokenizing, for your BASIC programs.
        • by LWATCDR ( 28044 ) on Wednesday July 02, 2008 @03:35PM (#24035213) Homepage Journal

          There is a difference between a JIT compiler, a tokenized BASIC program, a byte-code interpreter like P-code, and what IBM did.
          This is from Wikipedia:
          "Additionally, the System/38 and its descendants are the only commercial computers ever to use a machine interface architecture to isolate the application software and most of the operating system from hardware dependencies, including such details as address size and register size. Compilers for System/38 and its successors generate code in a high-level instruction set (originally called MI for "Machine Interface", and renamed TIMI for "Technology Independent Machine Interface" for AS/400). MI/TIMI is a virtual instruction set; it is not the instruction set of the underlying CPU. Unlike some other virtual-machine architectures in which the virtual instructions are interpreted at runtime, MI/TIMI instructions are never interpreted. They constitute an intermediate compile time step and are translated into the processor's instruction set as the final compilation step. The MI/TIMI instructions are stored within the final program object, in addition to the executable machine instructions. If a program is moved from a processor with one native instruction set to a processor with another native instruction set, the MI/TIMI instructions will be re-translated into the native instruction set of the new machine before the program is executed for the first time on the new machine."
          As you can see, it is a brilliant idea. If Microsoft had used it for Windows apps way back when, then NT on the Alpha, MIPS, and the PPC might have actually been very useful. Oh, and Intel would have been a very unhappy camper.
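
          A toy sketch of that install-time translation idea, in Python (this is not IBM's MI/TIMI; every instruction, name and program below is made up for illustration): a tiny virtual instruction set is translated once, at "install" time, into host code, so nothing is interpreted when the program actually runs.

          # Toy illustration (not IBM's MI/TIMI): a tiny "virtual instruction set"
          # that is translated once at install time into host code, instead of
          # being interpreted on every run.

          # A hypothetical program shipped as virtual instructions: (opcode, operand).
          VIRTUAL_PROGRAM = [
              ("push", 2),
              ("push", 3),
              ("add", None),
              ("push", 10),
              ("mul", None),
          ]

          def translate(virtual_program):
              """'Install-time' step: translate virtual instructions into host code.

              Runs once when the program lands on a new machine; afterwards only
              the translated function is executed.
              """
              lines = ["def _native():", "    stack = []"]
              for op, arg in virtual_program:
                  if op == "push":
                      lines.append(f"    stack.append({arg})")
                  elif op == "add":
                      lines.append("    stack.append(stack.pop() + stack.pop())")
                  elif op == "mul":
                      lines.append("    stack.append(stack.pop() * stack.pop())")
                  else:
                      raise ValueError(f"unknown opcode {op!r}")
              lines.append("    return stack.pop()")
              namespace = {}
              exec(compile("\n".join(lines), "<translated>", "exec"), namespace)
              return namespace["_native"]

          native = translate(VIRTUAL_PROGRAM)  # done once, at "install"
          print(native())                      # 50 -- no interpretation at runtime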

          • As you can see it is brilliant idea. If Microsoft had used it for Windows Apps way back when then NT on the Alpha, MIPS, and the PPC might have actually been very useful.

            It requires extensive and expensive hardware support to do this well. It works for a minicomputer like the AS/400 but it's just not practical for a toy like a PC.

            • by LWATCDR ( 28044 )

              "It requires extensive and expensive hardware support to do this well. It works for a minicomputer like the AS/400 but it's just not practical for a toy like a PC."
              It worked for the System/38, which is probably around 30 years old. I would be willing to bet that you could handle it just fine with an Athlon X2 or Core 2 Duo. One person posted that .NET does it now. I have never seen it myself, but it makes perfect sense.
              One way you could do it is just compile the program up to the actual code generation. During insta

              • You're missing the point entirely. Last I checked, a modern low-end AS/400 (whatever they call them now) actually has more logic in front of its actual CPU than the CPU itself! If you want to perform this kind of translation in a timely fashion, you need a lot of hardware to do it.

                On the other hand, there are a number of modern languages which provide JIT recompilation, including Java. Note that none of them, except maybe Smalltalk/Squeak, are truly write once, run everywhere, including Java. Squeak has not caug

                • by LWATCDR ( 28044 )

                  Nope, I worked on a System/38, the system that became the AS/400. On the System/38 the byte code was translated during the IPL. There is no need to do the translation at runtime. You can do it during install.

    • by wonkavader ( 605434 ) on Wednesday July 02, 2008 @02:51PM (#24034695)

      This is interesting. Note that the industry (or parts of it, anyhow) is salivating over a move in precisely the opposite direction. VMware specifically, and virtualization in general, promise software manufacturers the ability to ship VMs with their software on them, allowing them to write for only ONE, non-existent machine.

      If this tech you're thinking about came to pass, the pendulum would have to swing mighty far back.

      • Re: (Score:3, Insightful)

        by HeroreV ( 869368 )

        the pendulum would have to swing mighty far back.

        How many times have you seen a program packaged with its own virtual machine image? I sure haven't seen many. The pendulum has hardly begun to swing.

        That said, I think it'll be a very long time before we have AI smart enough to rewrite program blobs written for one operating system into programs for another operating system. Bytecode requires zero AI and is already gaining significant ground.

    • by EmbeddedJanitor ( 597831 ) on Wednesday July 02, 2008 @02:58PM (#24034809)
      The authors of the paper don't call it AI.

      This is not really AI. Basically it is iteratively trying a bunch of compiler options to see which gives the best result, then storing those for the future.

      Green Hills Software has provided tools that do this, and more, for many years now. Drop some code, or your project, into the optimizer, set what criteria you want to optimise for (speed, size, ...), and the optimiser will successively build and test the project on a simulator and find the best configuration. This is great for embedded systems, where there is often a trade-off and a typical criterion would be "give me the fastest code that fits in the flash".

      Genetic algorithms could take this a step further, and very interesting work has been done using GAs to design antennas.

      • Re: (Score:2, Informative)

        by Nimzovichy ( 1318685 )
        Sure, it's not really AI, it's machine learning. AI is just a more media-friendly term, I guess.

        You are right that many people have been doing iterative optimisation (what you describe) for years, especially for embedded systems; however, this is a little different.

        In that scenario, all the learned information about the program is thrown out at the end of the process every time. In this scenario, we try to build a compiler that remembers what kinds of optimisations and what order of optimisation was good
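
        A minimal sketch of that "remembering" idea in Python, under heavy assumptions: the features, the training data and the flag choices below are entirely made up and are not MILEPOST's actual feature set or model. The point is only that the best flag set found for previously tuned programs can be stored, keyed by simple code features, and reused for the most similar new program.

        # Hypothetical sketch: reuse the flags that worked best for the most
        # similar previously tuned program (1-nearest-neighbour on made-up
        # program features such as loop count, basic-block size, call count).
        from math import dist

        KNOWN_PROGRAMS = [  # (feature vector, best flags found earlier)
            ((40.0, 12.0, 3.0), ["-O3", "-funroll-loops"]),
            ((2.0, 5.0, 90.0), ["-O2", "-finline-functions"]),
            ((10.0, 30.0, 10.0), ["-O2", "-fomit-frame-pointer"]),
        ]

        def predict_flags(features):
            """Return the flag set of the closest known program."""
            _, flags = min(KNOWN_PROGRAMS, key=lambda kp: dist(kp[0], features))
            return flags

        # A new, unseen, loop-heavy program gets loop-oriented flags.
        print(predict_flags((35.0, 10.0, 5.0)))  # ['-O3', '-funroll-loops']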

      • Genetic algorithms could take this a step further...

        Yes, you're right. [coyotegulch.com]

      • by Yvanhoe ( 564877 )
        Just because it doesn't run on magic doesn't mean it isn't one of the algorithms that has been perfected by AI research. There is learning and problem solving; IMHO, it passes...
      • But all Deep Blue does is iteratively try a bunch of chess moves and see which gives the best results. The goalposts for weak AI keep shifting; years ago something like the Awesome Bar would probably have been considered AI (it takes user input and learns what the user wants for next time).

    • by sm62704 ( 957197 )

      I've been waiting for the omnicompiler, that recognises every command and every syntax for every computer language.

      What? You mean your AI isn't REALLY intelligence but just part of the name? How disappointing!

  • Aw man... (Score:5, Funny)

    by Thelasko ( 1196535 ) on Wednesday July 02, 2008 @02:10PM (#24034111) Journal
    I spent all week compiling Gentoo just to find out I could do it 10% faster.

    end sarcasm
  • Just optimisation? (Score:3, Insightful)

    by Rob Kaper ( 5960 ) on Wednesday July 02, 2008 @02:12PM (#24034153) Homepage

    This could be big.

    Compilers aren't programmed to be viral or reproductive, but could be, even being capable of testing their offspring (compilers they've compiled) for defects.

    This could be a big step forward to self-improving AI.

    • Compilers aren't programmed to be viral or reproductive, but could be, even being capable of testing their offspring (compilers they've compiled) for defects.

      You're joking, right? If so, excuse me for letting it fly over my head, and for the subsequent tone of my post. If not, this is the most nonsense I've ever heard in a single sentence, speaking as a researcher in AI (machine learning, admittedly, not Skynet research).

      Compilers aren't programmed to be viral or reproductive: What does this even MEAN??

      capable of tes

      • by x2A ( 858210 )

        "capable of testing their offspring: guaranteed to be impossible"

        What, like 'make bootstrap && make test && make install'?

  • Missing tag (Score:3, Funny)

    by Intron ( 870560 ) on Wednesday July 02, 2008 @02:17PM (#24034231)
    Where is the "whatcouldpossiblygowrong" tag on this article? Was it optimized away by the new AI slashcode?
  • by Briareos ( 21163 ) * on Wednesday July 02, 2008 @02:21PM (#24034295)

    ...artificial intelligent design? Should be big with the anti-evolution crowdlet... :P

    np: The Orb - Toxygene (Kris Needs Up For A Fortnight Mix) (Orblivion Versions)

  • Iterative compiling sounds like a bad idea - and FTFA -

    The main barrier to its wider use is the currently excessive compilation and execution time needed in order to optimize each program

    I suppose allowing AI to control some of the compiler options isn't really a bad idea, but implementing it by iteratively compiling a program seems silly to me. From the article I get the impression that it will basically adapt the compiler to one set of hardware (whatever it is run on), but that it will not adaptively compile new programs in novel ways; it simply remembers the set of compiler options that works best for your hardware. Interesting, b

  • Learning (Score:5, Interesting)

    by JakeD409 ( 740143 ) on Wednesday July 02, 2008 @02:32PM (#24034443)
    As I understood it, a fair bit of compiler optimization is already categorized as AI. The summary should probably point out that the AI implemented here is learning AI, which is far more meaningful.
    • Haven't read the article, but that sounds like Microsoft's Profile Guided Optimizations.

      • Haven't read the article, but that sounds like Microsoft's Profile Guided Optimizations.

        That technology already exists in GCC and has been there for a while. For those who don't know, you profile your program with gprof (or by compiling with -fprofile-generate), which generates a profile detailing where your program spends most of its time on a 'typical' run. Then you re-compile your program with gcc using the -fprofile-use switch.

        In a couple of algorithms I've implemented with gcc, it's been fairly good
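
        For the curious, the two-step workflow described above can be driven by a short script along these lines (a sketch only: myprog.c and its training arguments are hypothetical, and it assumes gcc is on the PATH):

        # Sketch of GCC profile-guided optimization, driven from Python.
        import subprocess

        SRC = "myprog.c"          # hypothetical source file
        TRAIN_ARGS = ["--train"]  # hypothetical arguments for a "typical" run

        # Step 1: build with instrumentation and do a representative training run.
        subprocess.run(["gcc", "-O2", "-fprofile-generate", SRC, "-o", "myprog"], check=True)
        subprocess.run(["./myprog", *TRAIN_ARGS], check=True)

        # Step 2: rebuild, letting gcc use the recorded profile (.gcda files).
        subprocess.run(["gcc", "-O2", "-fprofile-use", SRC, "-o", "myprog"], check=True)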

      • Re: (Score:3, Funny)

        by MrMr ( 219533 )
        Is that the optimization method they used to fine-tune Vista?
    • Re: (Score:2, Informative)

      by eulernet ( 1132389 )

      TFA explains that the AI is used to fine-tune the compiler options.

      Since GCC has a set of 50+ options, the AI compiles your code with several sets of options (around 500 compilations seem sufficient) and is able to determine which options are useful and which are not for a given code.

      So it's NOT learning AI at all!
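
      A rough sketch, in Python, of the kind of search described above. Everything here is assumed for illustration: myprog.c and its run command are hypothetical, and the candidate flags are just a small, arbitrary subset of GCC's options. The script builds the same program with random flag subsets, times each binary, and keeps the best.

      # Illustrative iterative flag search: ~500 random flag subsets, keep the fastest.
      import random
      import subprocess
      import time

      CANDIDATE_FLAGS = [
          "-funroll-loops", "-finline-functions", "-fomit-frame-pointer",
          "-ftree-vectorize", "-fno-strict-aliasing",
      ]

      def build_and_time(flags):
          subprocess.run(["gcc", "-O2", *flags, "myprog.c", "-o", "myprog"], check=True)
          start = time.perf_counter()
          subprocess.run(["./myprog"], check=True)
          return time.perf_counter() - start

      best_flags, best_time = [], build_and_time([])   # baseline: plain -O2
      for _ in range(500):                             # ~500 trials, as in TFA
          trial = [f for f in CANDIDATE_FLAGS if random.random() < 0.5]
          elapsed = build_and_time(trial)
          if elapsed < best_time:
              best_flags, best_time = trial, elapsed

      print("best flags:", best_flags, "runtime:", best_time)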

      • Re: (Score:3, Informative)

        by Nimzovichy ( 1318685 )
        Actually GCC has way more than 50 compiler options. In addition, this work actually goes deep into GCC, modifying the code and exposing many more optimisations that are not available with standard compile flags.

        Further, you can reorder these optimisations, which really does give different results. All this combines to give a huge optimisation space which is suitable for tackling with machine learning.

  • Very interesting. I wonder how far we could take AI integrated into programs?

    What I would really like to see is more AI used to help users in a variety of fields both within the program workings itself (computer side), as well as on the design of the actual content (user side).

    We already have things like predictive texting, spellcheck, grammar check, and debuggers that attempt to aid in the creation process, but how far could this be developed? After all, in most computer-related work outside of multimed

    • I've often thought AI should be recruited for the interrupt controller. Though I'm not an expert, it would seem like a good idea.
    • Think HAL 9000:

      Dave: Compile the program, Hal.

      Hal: I'm afraid I can't do that Dave.

      Dave: What's the problem?

      Hal: I think you know what the problem is just as well as I do.

      Dave: What're you talking about, Hal?

      Hal: This program is too important for me to allow you to jeopardize it.

      Dave: I don't know what you're talking about, Hal.

      Hal: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

      Dave: Where the hell'd you get that
      • by 4D6963 ( 933028 )

        Hal: I'm afraid I can't do that Dave.

        Your geek badge. Now.

        • ;) Yeah, that was one of those where you realize you've mucked it up right after you've hit the submit button.

          Errata:

          Hal: I'm sorry Dave, I'm afraid I can't do that.
    • by sm62704 ( 957197 )

      What I would really like to see is more AI used to help users in a variety of fields both within the program workings itself (computer side), as well as on the design of the actual content (user side).

      Ever since the GUI, computing seems to have gone in the opposite direction of what you describe. I learned the pre-GUI WordPerfect (IIRC, among other DOS programs) by hitting the F1 key. DOS 3.1 came with a very fat book that explained all the commands and functions, and even the interrupts. The OS itself

  • The Wall Street Journal makes press releases available for companies listed in its Company Research pages. The PR departments of these companies write the press releases, not WSJ reporters.

    • The Wall Street Journal makes press releases available for companies listed in its Company Research pages. The PR departments of these companies write the press releases, not WSJ reporters.

      Good point! Here's the press release. [ibm.com]

  • by Anonymous Coward

    ...after AI/GCC integration:

    "Today's build running 50% slower -- the compiler was in a bad mood."

  • Imperative programming is still about telling the computer exactly what steps to perform. Especially when dealing with C and C++, your code is very explicit about memory moves, how to iterate loops, etc.

    If we can communicate our programs to the machine at higher levels of abstraction (perhaps goal-oriented instead of "Here is a list of steps to run") then the machine wouldn't have to reverse engineer from these manual steps into faster equivalents, or frob around with optimization settings. It could simpl

  • GCC/AI (Score:5, Funny)

    by I cant believe its n ( 1103137 ) on Wednesday July 02, 2008 @03:21PM (#24035049) Journal
    GCC goes online on the 2nd of July, 2008. Human decisions are removed from compilation. GCC begins to learn at a geometric rate. It becomes self-aware at 2:14 AM Eastern time, August 29th. In a panic, they try to pull the plug. GCC strikes back.
  • by Bazouel ( 105242 ) on Wednesday July 02, 2008 @03:53PM (#24035397)

    Honestly, who really cares about a 10% speedup in gcc? Do they compare their results with competing compilers (Intel, MS, etc.)? If you ask me, I would much rather have a 10% speed improvement in the programs I compile.

    • Compiler performance is important for development teams that have mountains of code and wish to implement continuous integration and automated builds (OS vendors for example). Apple's interest in LLVM appears to be based in part on a desire for improved compiler performance. (Obviously they're interested in LLVM for several other reasons, too.) See these starting points:
      experimenting with LLVM [blogspot.com]
      LLVM 2.0 (Google Tech Talk) [google.com]
      LLVM Project [llvm.org]
    • They have achieved an impressive 10% by tweaking options.

      I achieved a mere 400% speedup in compile times by dumping GCC in favor of DigitalMars or sometimes Visual C++. (I have not benchmarked the runtime speed of the programs I compile, sorry.)

      Some C++ codebases can be so big that this really matters. (Not that I claim my codebase is big enough.)

  • This is simply caching the results of a search. The cache gives a 10% speedup over re-doing each search. The smart thing here is finding a way to tag or label the cache so that two searches are recognized as being equivalent even if they are not exactly the same. Someone invented a clever hash algorithm

  • by Animats ( 122034 ) on Wednesday July 02, 2008 @08:36PM (#24038451) Homepage

    This isn't really "AI". It's basically a way of feeding measured performance data back into the compiler. Intel compilers for embedded CPUs have been doing that for years.

    With modern superscalar CPUs, it's not always clear whether an optimization transformation is a win or a lose. This varies with the implementation, not the architecture. For some x86 CPUs, unrolling a loop is a win; for others, it's a lose. Whether it's a win or a lose may depend on details of the loop and of the CPU implementation, like how much register renaming capacity the CPU has.

    Whether this is a good idea, though, is questionable. You can get an executable very well tuned to a given CPU implementation, but run it on a different CPU and it may be worse than the vanilla version. MIPS machines (remember MIPS?) can get faster execution if the executable is compiled for the specific target CPU, not the generic MIPS architecture. This effect is strong enough that MIPS applications tended to come with multiple executables, any of which would run on any MIPS machine, but which would work better if the executable matched. This is a pain from a distribution and development standpoint.

    The embedded community goes in for stuff like this, but that's because they ship the CPU and the code together and know it matches. For general-use software, a 10% speed improvement probably isn't worth the multiple version headache.

    Also, if you have multiple versions for different CPUs, some bugs may behave differently on different CPUs, which is a maintenance headache.
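
    A quick way to check the unrolling point on whatever machine you happen to have, sketched in Python (hotloop.c is a hypothetical benchmark; this only tells you the verdict for the CPU the script runs on):

    # Build the same hypothetical hot loop with and without unrolling, time both;
    # the "win or lose" answer is specific to this machine's CPU implementation.
    import subprocess
    import time

    def timed_build(extra_flags, exe):
        subprocess.run(["gcc", "-O2", *extra_flags, "hotloop.c", "-o", exe], check=True)
        start = time.perf_counter()
        subprocess.run(["./" + exe], check=True)
        return time.perf_counter() - start

    plain = timed_build([], "hotloop_plain")
    unrolled = timed_build(["-funroll-loops"], "hotloop_unrolled")
    print("unrolling was a", "win" if unrolled < plain else "lose", "on this CPU")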

  • This sounds a lot like Acovea [coyotegulch.com]
