• Understanding instructions


#2313098

I have always wondered whether it is possible to create a sequence of 0s and 1s such that the computer performs an invalid instruction, because it understands the sequence differently from what you intended.

Do you understand what I mean?

Say I want to append a line of text (“Hello!”) to the end of a file, and the instruction goes
111000111100001111110010110111000111110001111000100000000001111
= one machine instruction

But what if that same instruction
111000111100001111110010110111000111110001111000100000000001111
could also mean something else? Set a yellow color on the first line, or something else entirely?

Could this be what’s happening when a PC malfunctions?

    Dell Latitude 3420, Intel Core i7 @ 2.8 GHz, 16GB RAM, W10 22H2 Enterprise

    HAL3000, AMD Athlon 200GE @ 3,4 GHz, 8GB RAM, Fedora 29

    PRUSA i3 MK3S+

    • #2313115

      Doriel,

      It depends on the type of file and how the system interprets what it reads from that file.
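For instance, a minimal Python sketch of that idea (the byte values are made up for illustration): the same four bytes come out as text, an integer, or a float, depending on how the reader decides to interpret them.

import struct

data = bytes([0x48, 0x69, 0x21, 0x0A])   # the same four bytes every time

print(data.decode("ascii"))              # read as text: "Hi!" plus a newline
print(struct.unpack("<I", data)[0])      # read as a little-endian integer: 169961800
print(struct.unpack("<f", data)[0])      # read as a 32-bit float: a tiny number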

      May the Forces of good computing be with you!

      RG

      PowerShell & VBA Rule!

      • #2313116

Thank you, that is correct.
So the same sequence of 0s and 1s can do different things depending on which function invoked it?

        Dell Latitude 3420, Intel Core i7 @ 2.8 GHz, 16GB RAM, W10 22H2 Enterprise

        HAL3000, AMD Athlon 200GE @ 3,4 GHz, 8GB RAM, Fedora 29

        PRUSA i3 MK3S+

    • #2313122

      111000111100001111110010110111000111110001111000100000000001111

That’s binary, and the CPU can only read and process it as machine code; normally, a compiler translates source code written in a programming language into that machine code. What programming language are you using?
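To make the translation step concrete, here is a small sketch using Python’s built-in compile (a C or Delphi compiler does the same job, only all the way down to real CPU instructions):

source = "print('Hello!')"                # human-readable source code

code = compile(source, "<demo>", "exec")  # translated into instruction bytes
print(code.co_code.hex())                 # those bytes, shown in hex

exec(code)                                # the machine reads the bytes -> Hello!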

      Always create a fresh drive image before making system changes/Windows updates; you may need to start over!
      We all have our own reasons for doing the things that we do with our systems; we don't need anyone's approval, and we don't all have to do the same things.
      We were all once "Average Users".

      • #2313124

Exactly. The CPU must be able to read and process it.
But the interpretation can vary depending on where the binary code comes from. My question is probably hard to understand, or it’s just a bad question, but what tells the computer how to process this binary code? Or is the communication not carried out in pure 0s and 1s?

I’m not talking about any one programming language (Python, C++, Delphi); I’m talking about the computer’s own language.

        Dell Latitude 3420, Intel Core i7 @ 2.8 GHz, 16GB RAM, W10 22H2 Enterprise

        HAL3000, AMD Athlon 200GE @ 3,4 GHz, 8GB RAM, Fedora 29

        PRUSA i3 MK3S+

        • #2313218

          Timing is everything.
          A CPU has Fetch and Execute parts to its machine cycle(s).
The “word” (binary digits) brought into the CPU during its Fetch cycle is interpreted as an instruction, but if the same word is brought in during the Execute cycle, those same bits are used as data instead. (The data may be an addend, subtrahend, divisor, etc.)
What happens depends largely on the timing of the CPU’s Fetch and Execute cycles; essentially, everything depends on the CPU’s clock.
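A toy version of that machine cycle in Python (the three opcodes are invented purely for illustration): watch how the word in cell 4 is read twice as data during Execute, then fetched once as an instruction.

# Invented opcodes: 1 = LOAD addr, 2 = ADD addr, 3 = HALT
memory = [1, 4, 2, 4, 3, 0]   # one shared memory for program and data

acc, pc = 0, 0
while True:
    opcode, operand = memory[pc], memory[pc + 1]   # Fetch: words read as an instruction
    pc += 2
    if opcode == 1:
        acc = memory[operand]      # Execute: the same kind of word, read as data
    elif opcode == 2:
        acc += memory[operand]
    elif opcode == 3:
        break

print(acc)   # 6 -- the 3 in cell 4 served twice as data, then was fetched as HALT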

          Image or Clone often! Backup, backup, backup, backup......
          - - - - -
          Home Built: Windows 10 Home 64-bit, AMD Athlon II X3 435 CPU, 16GB RAM, ASUSTeK M4A89GTD-PRO/USB3 (AM3) motherboard, 512GB SanDisk SSD, 3 TB WD HDD, 1024MB ATI AMD RADEON HD 6450 video, ASUS VE278 (1920x1080) display, ATAPI iHAS224 Optical Drive, integrated Realtek HD Audio

    • #2313204

      --Joe

    • #2313214

      The days of programming microprocessors…

      Each processor has a set of boot instructions, a set of interrupt instructions and a set of user instructions.

The boot code deals with what happens when you switch on or reset.

The interrupt handlers deal with the essential housekeeping, such as reading the time, reading the mouse and other inputs, and writing to the screen and other outputs. After each interrupt has been dealt with, the processor hands control back to the user instructions.

The user instructions are the ones you write, in the form of commands. Assembler uses mnemonics like JNE and LDA. These commands are converted into machine code (usually viewed in hexadecimal), which ultimately is the 0s and 1s you refer to.

Your code sets up the array, reads in the data (also 0s and 1s), performs the calculations on the data, and outputs the end result (0s and 1s) to the screen memory (or to transistors connected to switching circuits in the real world). Your program also has to handle everything else that is going on, and fail gracefully and predictably when something goes wrong.

There is no difference between data and code. If your program tells the processor to execute the data(!), the processor will try to do so. It will then fall over, and everyone will blame you…
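The mnemonic-to-machine-code step, sketched in Python (the opcode byte values are invented for illustration, not any real processor’s):

# Invented opcode table; a real assembler maps mnemonics such as LDA and JNE
# to processor-specific byte values in exactly this fashion.
OPCODES = {"LDA": 0xA9, "ADC": 0x69, "HLT": 0x02}

program = [("LDA", 0x05), ("ADC", 0x03), ("HLT", 0x00)]
machine_code = bytes(b for mnem, arg in program for b in (OPCODES[mnem], arg))
print(machine_code.hex(" "))   # a9 05 69 03 02 00 -- machine code viewed in hexadecimal

# "Executing the data": start decoding at the wrong place and the operand 0x05
# is read as an opcode that does not exist -- a real CPU falls over here too.
if machine_code[1] not in OPCODES.values():
    print(f"illegal opcode 0x{machine_code[1]:02x} -- crash, and everyone blames you")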

      It was fun but I don’t want to go back there!

    • #2313233

But what if that same instruction
could also mean something else? Set a yellow color on the first line, or something else entirely?

Could this be what’s happening when a PC malfunctions?

      Actually, this kind of thing was not all that uncommon at one point… with serial terminals and the like. This was a natural consequence of transferring the control codes inline in the same serial data stream as the actual content.

Setting your terminal type and character set wrong (for us non-US-ASCII folks) could indeed cause the terminal to interpret regular text as control codes. (A color change is a control code in that context.)

Also, it was occasionally a real bother to enter characters such as ü, ô or Å, even when things worked “correctly”…
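The yellow color from the original question is exactly one of those control codes. A small Python sketch (these ANSI/ECMA-48 escape sequences are real, and most terminals still honor them):

import sys

yellow = b"\x1b[33m"   # ESC [ 33 m : "set foreground color to yellow"
reset  = b"\x1b[0m"    # ESC [ 0 m  : "back to normal"

# Sent to a terminal, the bytes are interpreted as a color change...
sys.stdout.buffer.write(yellow + b"Hello!" + reset + b"\n")
sys.stdout.buffer.flush()

# ...while the very same bytes are just data when shown as a repr.
print(repr(yellow))    # b'\x1b[33m'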

So the same sequence of 0s and 1s can do different things depending on which function invoked it?

      Very much so. And also when called from the same function but under different settings.

And in the old days, particularly skilled programmers sometimes took advantage of this to save space… and malware authors were known to smuggle bad code in as “data” and then get it run as code anyway. So nowadays we have attempts to isolate code from data – but then again, for some applications code must be data (programming tools, for example).
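Code-as-data in miniature, sketched in Python (illustrative only, and incidentally why running untrusted “data” is the classic malware trick):

payload = "print('a moment ago I was only data')"   # arrives as a plain string

print(len(payload))   # treated as data: measured, stored, copied around
exec(payload)         # treated as code: the same characters are now executed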

    • #2313272

Nowadays, the easiest way to learn low-level programming skills appears to be the Raspberry Pi. I understand it can run several high-level languages, but you can still get down to the machine code if you want to.

Alternatively, find a dark room and lie down until the feeling goes away…

    • #2313721

Thank you all for helping me understand these fundamentals.
What I think is important to stress is the comment made by @RockE:

      Timing is everything.
      A CPU has Fetch and Execute parts to its machine cycle(s).

That explains a lot. I was aware of it, but I had somehow ignored the information about the CPU clock.

Also, the article @joep517 linked was a good read. It goes deeper, into assembler-like instructions, but it shows how instructions are understood and explains the principle behind addition.

      @mn-
I run into this every day in our Telnet consoles 🙂 If I configure the wrong server language, special Czech characters (ě, š, č, …) are displayed incorrectly.
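That is the same bytes-versus-interpretation story. A Python sketch of one plausible wrong-codec pairing (UTF-8 bytes read as Latin-1):

text = "ěšč"                   # Czech characters
raw  = text.encode("utf-8")    # the bytes that actually travel over Telnet

print(raw.decode("utf-8"))     # right setting -> ěšč
print(raw.decode("latin-1"))   # wrong setting -> garbled mojibake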

      Dell Latitude 3420, Intel Core i7 @ 2.8 GHz, 16GB RAM, W10 22H2 Enterprise

      HAL3000, AMD Athlon 200GE @ 3,4 GHz, 8GB RAM, Fedora 29

      PRUSA i3 MK3S+

    • #2313806

      Also… many of the “usual fundamentals” actually aren’t so fundamental for computing, they’re just the “established” way that has managed to accumulate institutional baggage and economies of scale.

      There’s all kinds of additional reading if you care enough to dig into the differences between “Harvard” architecture and “Princeton” or “von Neumann” architecture… and then there are the other ways like asynchronous dataflow and transport-triggered architectures, all the way into wavefront processing…

      A CPU has Fetch and Execute parts to its machine cycle(s).
The “word” (binary digits) brought into the CPU during its Fetch cycle is interpreted as an instruction, but if the same word is brought in during the Execute cycle, those same bits are used as data instead.

That’s the von Neumann, aka Princeton, style. In contrast, a Harvard architecture fetches instructions and data simultaneously, but over different wires. A strictly pure Harvard architecture even uses separate storage devices for the two.

Mainstream PC processors these days are actually hybrids to various degrees: typically separate L1 caches for instructions and data (Harvard style), but L2 and below unified (Princeton style), and so on…
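A toy sketch of the structural difference (invented, echoing the toy machine upthread):

# Princeton / von Neumann: one memory; instruction fetches and data reads share it.
unified = [1, 4, 2, 4, 3, 0]
fetched = unified[0]       # fetch an instruction word...
operand = unified[4]       # ...and read data from the very same store

# Harvard: two stores on separate buses; the same address names different words.
instr_mem = [1, 0, 2, 0, 3, 0]
data_mem  = [3, 0]
fetched = instr_mem[0]     # address 0 on the instruction bus
operand = data_mem[0]      # address 0 on the data bus -- a different word entirely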
