IP address doctrine


 * http://dmca.cs.washington.edu/
 * « We were able to generate hundreds of real DMCA takedown notices for computers at the University of Washington that never downloaded nor shared any content whatsoever. »
 * « We were able to remotely generate complaints for nonsense devices including several printers. »


 * http://fabrice.lefessant.net/lefessant-hadopi-2009.pdf
 * « The automated graduated response rests on one assumption, the user's ability to fully secure his home computer setup, an impossibility that is well known to all experts. »


 * http://www.canal-u.tv/canalu/producteurs/universite_de_tous_les_savoirs/dossier_programmes/les_conferences_de_l_annee_2000/la_societe_informatique_vers_la_societe_de_communication_et_vers_la_societe_de_surveillance/espionnage_piratage_risque_informatique_et_criminalite
 * http://www.ietf.org/rfc/rfc2401.txt (2.1 Goals/Objectives/Requirements/Problem Description)
 * http://www.eff.org/files/filenode/Capitol_v_Foster/amicus_in_support_of_fees.pdf (page 6: B. IP Addresses as Inadequate Identifiers)
 * « An IP address is not necessarily limited to a single computer or a single user. Often, a group of computers can share the same IP address, much like in a household, where multiple people can share a single telephone number. »
 * « Knowing only the address from which a message originated tells nothing about who in the building sent or received the message. Similarly, knowing only the IP address tells nothing about which computer was using the IP address at the time. »
 * « This is much like identifying the street address of a restaurant or other business and trying to use that information alone to identify a specific customer who might have been shopping or snacking at a particular time and date. »
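The telephone-number analogy above describes network address translation (NAT): every machine in a household shares the router's single public address. A minimal sketch of why that public address alone cannot identify a machine (all addresses below are hypothetical, taken from the RFC 5737 documentation ranges):

```python
# Sketch of a NAT router: it rewrites the source of every outgoing
# packet to its own public address, keeping a private mapping table.

PUBLIC_IP = "203.0.113.7"  # hypothetical shared public address

# NAT table: public source port -> (private host, private port)
nat_table = {}
next_port = 40000

def translate_outgoing(private_ip, private_port):
    """Rewrite a LAN packet's source to the shared public address."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

# Three different household machines open connections...
a = translate_outgoing("192.168.1.10", 51515)  # laptop
b = translate_outgoing("192.168.1.11", 51516)  # phone
c = translate_outgoing("192.168.1.12", 9100)   # printer

# ...yet the outside world sees the same IP address for all of them.
assert a[0] == b[0] == c[0] == PUBLIC_IP
```

Only the router's ephemeral NAT table can say which machine was behind a given connection, and that table is gone once the connections close — which is exactly the ambiguity the EFF brief describes.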


 * Julian Green, acquitted after being accused of possessing pornographic images on his PC; they had been planted by a Trojan horse. The NYT notes that this was the first case of a defendant using this defense.
 * Security Engineering by Ross Anderson. Some chapters of the second edition are available online; the first edition is available online in full. The problem is that it is exhaustive, but it must surely contain the "right" sentence somewhere.
 * IP address spoofing: a presentation by Hervé Schauer (HSC) on IP spoofing techniques (1999).
 * References from Wikipedia:
 * CERT advisory CA-1995-01
 * CERT advisory CA-1996-21
 * this advisory is not relevant to our point: it describes a "SYN flood" denial-of-service attack, not fraudulent use of an IP address.
 * "IP Spoofing: An Introduction" on Security Focus
 * Spoofer Project, an MIT initiative to measure how feasible it is to spoof the source IP address of packets on the Internet.
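The core fact behind IP spoofing is that nothing in the IPv4 header authenticates the source field: whoever constructs the packet chooses it. A minimal sketch, building the 20-byte header per RFC 791 with an arbitrary source (the addresses are hypothetical documentation-range ones, and the checksum is left at zero for brevity):

```python
import struct

def ipv4_header(src, dst, payload_len=0):
    """Return a 20-byte IPv4 header with the given source/destination."""
    version_ihl = (4 << 4) | 5            # IPv4, header length 5 * 32-bit words
    total_length = 20 + payload_len
    ttl, proto = 64, 1                    # protocol 1 = ICMP
    src_bytes = bytes(int(o) for o in src.split("."))
    dst_bytes = bytes(int(o) for o in dst.split("."))
    # Fields: version/IHL, TOS, total length, ID, flags/fragment,
    # TTL, protocol, checksum (0 here), source, destination.
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, total_length,
                       0, 0, ttl, proto, 0,
                       src_bytes, dst_bytes)

# The source can be any address at all -- here one the sender does not own.
hdr = ipv4_header("198.51.100.42", "203.0.113.7")
assert hdr[12:16] == bytes([198, 51, 100, 42])  # spoofed, unverified source
```

Actually emitting such a packet requires a raw socket (and root privileges), and many networks now filter forged sources at the edge — measuring how many still do not is precisely the point of the Spoofer Project above.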


 * http://www.numerama.com/magazine/12503-Hadopi-interview-d-un-chasseur-d-adresses-IP.html - it could be worth contacting Frédéric Aidouni


 * a few notes on the vagueness of the required security measures here: http://www.itu.int/dms_pub/itu-d/opb/stg/D-STG-SG02.09.1.3-2006-MSW-F.doc

BOX 2.4 Possible Points of Vulnerability in Information Technology Systems and Networks

An information technology system or network has many places where an operationally exploitable vulnerability can be found; in principle, a completely justifiable trust in the system can be found only in environments that are completely under the control of the party who cares most about the security of the system. As discussed here, the environment consists of many things—all of which must be under the interested party's control.

The software is the most obvious set of vulnerabilities. In a running operating system or application, exploitable vulnerabilities may be present as the result of faulty program design or implementation, and viruses or worms may be introduced when the system or network comes in electronic contact with a hostile source. But there are more subtle paths by which vulnerabilities can be introduced as well. For example, compilers are used to generate object code from source code. The compiler itself must be secure, for it could introduce object code that subversively and subtly modifies the functionality represented in the source code. A particular sequence of instructions could exploit an obscure and poorly known characteristic of hardware functioning, which means that programmers well versed in minute behavioral details of the machine on which the code will be running could introduce functionality that would likely go undetected in any review of the code.

The hardware constitutes another set of vulnerabilities, although less attention is usually paid to hardware in this regard. Hardware includes microprocessors, microcontrollers, firmware, circuit boards, power supplies, peripherals such as printers or scanners, storage devices, and communications equipment such as network cards. On the one hand, hardware is physical, so tampering with these components requires physical access at some point in the hardware's life cycle, which may be difficult to obtain. On the other hand, hardware is difficult to inspect, so hardware compromises are hard to detect. Consider, for example, that graphics display cards often have onboard processors and memory that can support an execution stream entirely separate from that running on a system's "main" processor. Also, peripheral devices, often with their own microprocessor controllers and programs, can engage in bidirectional communications with their hosts, providing a possible vector for outside influence. And, of course, many systems rely on a field-upgradable read-only memory (ROM) chip to support a boot sequence—and corrupted or compromised ROMs could prove harmful in many situations.

The communications channels between the system or network and the "outside" world present another set of vulnerabilities. In general, a system that does not interact with anyone is secure, but it is also largely useless. Thus, communications of some sort must be established, and those channels can be compromised—for example, by spoofing (an adversary pretends to be the "authorized" system), by jamming (an adversary denies access to anyone else), or by eavesdropping (an adversary obtains information intended to be confidential).

Operators and users present a particularly challenging set of vulnerabilities. Both can be compromised through blackmail or extortion. Or, untrustworthy operators and users can be planted as spies. But users can also be tricked into actions that compromise security. For example, in one recent exploit, a red team used inexpensive universal serial bus (USB) flash drives to penetrate an organization's security. The red team scattered USB drives in parking lots, smoking areas, and other areas of high traffic. In addition to some innocuous images, each drive was preprogrammed with software that would collect passwords, log-ins, and machine-specific information from the user's computer, and then e-mail the findings to the red team. Because many systems support an "auto-run" feature for insertable media (i.e., when the medium is inserted, the system automatically runs a program named "autorun.exe" on the medium) and the feature is often turned on, the red team was notified as soon as the drive was inserted. The result: 75 percent of the USB drives distributed were inserted into a computer.

Given the holistic nature of security, it is also worth noting that vulnerabilities can be introduced at every point in the supply chain: that is, systems (and their components) can be attacked in design, development, testing, production, distribution, installation, configuration, maintenance, and operation. On the way to a customer, a set of CD-ROMs may be intercepted and a different set introduced in its place; extra functionality might be introduced during chip fabrication or motherboard assembly; a default security configuration might be left in an insecure state—and the list goes on.

Given the dependence of security on all of these elements in the supply chain, it is not unreasonable to think of security as an emergent property of a system, as its architecture is implemented, its code instantiated, and as the system itself is embedded in a human and an organizational context. In practice, this means that the actual vulnerabilities that a system must resist are specific to that particular system embedded in its particular context. This fact should not discourage the development of generic building blocks for security that might be assembled in a system-specific way, but it does mean that an adversary could attack many possible targets in its quest to compromise a system or a network.

SOURCES: Information on compilers based on Ken Thompson, "Reflections on Trusting Trust," Communications of the ACM, 27(8): 761-763, August 1984. See also P.A. Karger and R.R. Schell, "Thirty Years Later: Lessons from the Multics Security Evaluation," pp. 119-126 in Proceedings of the 18th Annual Computer Security Applications Conference, December 9-13, 2002, Las Vegas, Nev.: IEEE Computer Society. Available at http://www.acsa-admin.org/2002/papers/classic-multics.pdf. Information on USB drive: See Steve Stasiukonis, "Social Engineering, the USB Way," Dark Reading, June 7, 2006. Available at http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1. Information on chip fabrication based on Defense Science Board, High Performance Microchip Supply, Department of Defense, February 2005; available at http://www.acq.osd.mil/dsb/reports/2005-02-HPMS_Report_Final.pdf.

Source: Toward a Safer and More Secure Cyberspace http://www.cyber.st.dhs.gov/docs/Toward_a_Safer_and_More_Secure_Cyberspace-Full_report.pdf