
Tuesday, February 21, 2017

NEWS AND ARTICLES

New Internet security method better at telling humans and machines apart

It is a new generation of CAPTCHAs based on the human ability to process images


Researchers at the New Jersey Institute of Technology, in the United States, have developed a new CAPTCHA method based on the human ability to process images that flash by quickly in an animation. A contrasting color is added to make it even harder for bots to interpret. The result is a test that is easy for a person, who simply has to identify the text that appears in a short video, but hard for a machine, which has to extract its meaning. By Patricia Pérez


Bots, or Internet robots, roam the web at will, and not always for good purposes. Yet web security designers still hold an advantage over these automated programs that pass themselves off as people: for now, some human skills remain too complex for a bot to imitate.

Exploiting those weaknesses is the main focus of a team of researchers at the New Jersey Institute of Technology (NJIT), in the United States, which has developed a new Internet security method capable of telling humans and machines apart.

Professor of Electrical and Computer Engineering Nirwan Ansari and two of his former students, Amey Shevtekar and Christopher Neylan, have designed what could be considered the next generation of CAPTCHAs. The name is the acronym of Completely Automated Public Turing test to tell Computers and Humans Apart, the system found on countless websites that automatically puts the user through a test which, by design, only a human can pass.

They usually consist of simple combinations of distorted numbers and letters, but the NJIT team proposes replacing them with video animations. As explained in a statement, the new method, recently patented, relies on the human ability to process images that flash by quickly in an animation. Indeed, the standard frame rate in film is 24 frames per second, because the brain retains each image for roughly four hundredths of a second.

Added to this is the human tendency to interpret colors differently, as happened recently with the viral image of The dress and the debate over whether it was white or blue. The brain interprets color according to its context, such as nearby colors, light, or shadow. In short, one more obstacle for computers.

Visual intelligence and contrast

"Today's static CAPTCHAs can be broken easily, so we set out to build a more robust test by exploiting our complex visual intelligence," Ansari explains. The new method therefore does not work by capturing a single frame or merging them all together; it requires the uniquely human ability to connect the images.

On top of this, a contrasting color makes it even harder for bots to interpret. The result is a test that is easy for a person, who simply has to identify the text that appears in the short video, but hard for a machine, which has to extract its meaning.
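To make the idea concrete, the following is a minimal, purely illustrative sketch (not the NJIT system) of how an animated text challenge could be assembled: each frame reveals only part of the challenge in low-contrast colors, so no single frame, and no naive merge of frames, contains the readable answer. It assumes the Pillow imaging library; the text, colors, frame count and timing are invented for illustration.

# Illustrative sketch of a frame-split animated CAPTCHA (assumed parameters).
from PIL import Image, ImageDraw, ImageFont

TEXT = "K7PQ2"
FRAMES = 24                      # roughly one second at cinema frame rate
SIZE = (200, 60)
BACKGROUND = (90, 110, 120)      # low-contrast background
INK = (110, 130, 140)            # ink color close to the background

font = ImageFont.load_default()
frames = []
for i in range(FRAMES):
    img = Image.new("RGB", SIZE, BACKGROUND)
    draw = ImageDraw.Draw(img)
    # reveal only one character per frame, cycling through the challenge text
    ch = TEXT[i % len(TEXT)]
    x = 20 + (i % len(TEXT)) * 35
    draw.text((x, 20), ch, fill=INK, font=font)
    frames.append(img)

# assemble the frames into a short animation (~40 ms per frame, about 25 fps)
frames[0].save("captcha.gif", save_all=True,
               append_images=frames[1:], duration=40, loop=0)

A human watching the resulting animation easily reads the whole string, while any single captured frame shows only an isolated, low-contrast character.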

The researchers also maintain that the new test was designed to simplify access to websites and other vulnerable access points and transactions, while safeguarding them against attacks and intrusions. "In order to cope with more sophisticated attackers, CAPTCHAs are becoming harder and harder for humans to solve," Ansari admits, which is why they propose a system "of simple text that is therefore easy to recognize."

Challenge

This new CAPTCHA technology has earned Ansari his twenty-fifth patent since 2000, when he received his first one for an algorithm to control congestion in ATM (Asynchronous Transfer Mode) switches, supporting moderate speeds such as those of ADSL.

In recent years the professor has become a leading expert in "green communications," whose goal is to transform the US communications infrastructure into one that is more reliable and energy efficient.

"Irónicamente, los avances en la creación de redes tecnologías aumenta la rápida propagación de gusanos y bots, lo que agrava las amenazas a la integridad de la Internet", señala.

Meanwhile, the bots themselves keep growing more sophisticated, driven by professionals motivated by financial incentives and cyberterrorism. "There will never be a perfect system, so we will have to keep up in this game of cops and robbers," Ansari adds.


5G: a revolution in the making - Part V - Creating your own network

Saturday, May 7, 2016

There are many proposed architectures for the move from 4G to 5G. Basically all of them have NFV and SDN at their core. Credit: Netmanias
Since the 1990s, following the deployment of the Intelligent Network in the 1980s, telecom operators have looked for ways to boost the flexibility of their resources, both for rapid service creation and deployment and for minimizing CAPEX through resource reuse. As networks morphed into interconnected computers dominated by software, this just made sense. The TINA initiative, Telecommunications Information Networking Architecture, was one example of a worldwide effort to exploit the softwarization of network equipment.
Times were not sufficiently mature, partly because of technology (most of the network was not "software based") and partly because the market was still dominated by agreements between telecom manufacturers and operators.
Besides, at that time operators (bound by contracts to manufacturers) were clinging to their long-standing architecture, defending it from the IP-ization of signaling and transport. Rather than embracing the IP architecture, which would have flattened their networks, they preferred to keep the hierarchical structure that ensured better control and transport quality assurance (IP was, and is, a best-effort approach). They deployed ATM, Asynchronous Transfer Mode, a protocol (and related architecture) that could ensure Quality of Service (QoS), something that best-effort IP could not.
Nowadays the battle between ATM and IP is over. IP won, and it won because the argument that sustained ATM, guaranteed Quality of Service, was superseded by the tremendous growth in network capacity, which made IP communications "good enough" for most applications.

Over the same twenty-year period, 1990-2010, the network became even more softwarized, including the add-drop multiplexers and the bridges, and it started to be populated by data centers (mobile networks work thanks to data centers spread all over and interconnected: the HLR, Home Location Register, and the VLR, Visitor Location Register, which basically record who "owns" your phone (HLR) and on which network the phone is at any particular time (VLR)).
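As a toy illustration of the HLR/VLR roles just mentioned (nothing like a real implementation), the two registers can be thought of as simple lookup tables: one records which operator "owns" a subscriber, the other records where that subscriber is roaming right now. The identifiers and network names below are invented.

# Toy HLR/VLR lookup tables; all identifiers are invented for illustration.
home_location_register = {          # subscriber id -> home operator (HLR)
    "imsi-001": "operator-A",
    "imsi-002": "operator-B",
}
visitor_location_register = {       # subscriber id -> network currently serving it (VLR)
    "imsi-001": "operator-C/cell-17",
}

def locate_subscriber(imsi):
    """Find the home operator, then the network where the phone is right now."""
    home = home_location_register.get(imsi)
    visiting = visitor_location_register.get(imsi, f"{home} (home network)")
    return home, visiting

print(locate_subscriber("imsi-001"))   # ('operator-A', 'operator-C/cell-17')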
More than that: data centers, service centers and the terminals themselves created an ecosystem where software is in charge. Software can adapt the interconnection to the specifics of a gateway; the terminal no longer needs to be built to a network provider's specs. It can run any software, including software that makes the terminal compatible with that network.
This does not apply just to smartphones. It can apply to any device that needs to connect to the network, including vehicles. It requires processing power to run the software and a few MB to spare to host the software applications. Both are generally a non-issue, but where they are too expensive to be economically viable (mostly sensors or tiny actuators, whose power constraints usually limit the processing power), the device can hook onto a low-cost (from a performance and energy point of view) interconnection, leaving to a controller the task of interconnecting with the network.
The flatter structure of the network, resulting from bridges and switches being integrated into the routers, lends itself to much more dynamic management.
Say hello to NFV, Network Function Virtualization, and to SDN, Software Defined Networking. The idea is that each piece of network equipment can be stripped down to the minimal subset of functionality needed to interact with its hardware periphery, migrating the more complex management functionality to the cloud (NFV). In parallel, the orchestration of network elements can also occur in the cloud, outside the network (SDN). As I have indicated in the previous posts in this series, terminals are becoming network equipment; they just fall under a separate (private and basically unregulated) jurisdiction.
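A minimal sketch of that split, with invented class and method names (not any real controller's API), may help: the "switch" keeps nothing but a match/action table, while all decision logic lives in a separate controller object that could just as well run in the cloud.

# Illustrative control-plane / data-plane split; names are assumptions.
class Switch:
    """Data plane: nothing but a match/action table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                       # destination prefix -> output port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")    # default action when no rule matches

class Controller:
    """Control plane: computes forwarding decisions and installs them remotely."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def install_route(self, dst, port):
        for sw in self.switches:                   # orchestrate every element at once
            sw.flow_table[dst] = port

ctrl = Controller()
edge = Switch("edge-1")
ctrl.register(edge)
ctrl.install_route("10.0.0.0/24", "port-3")
print(edge.forward("10.0.0.0/24"))                 # -> port-3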
A crucial point, of course, is who defines which functionalities to migrate and who gets to play the orchestration role. In the Internet architecture, "orchestration" is highly distributed, with no central command post. Could this be the case for the SDN orchestrator?
Probably, in the first phase, the orchestrator(s) will be the turf of a few network operators or service providers, each with a responsibility domain limited to the network resources it owns.
However, in a second phase, which in a way has already started, such orchestration may become distributed. At the network operator level it makes sense to have an orchestrator of one's own, reaching out and negotiating the use of network resources belonging to other owners. That would make it possible to ensure end-to-end QoS, a holy grail for operators ever since they lost control of the end-to-end network (with the advent of the Internet and independent service providers). This is what Software Defined Networking offers: the possibility to harvest the resources needed to ensure the best (paid-for) QoS. At the same time, at the edges of the network, third parties will start to offer services, partly embedded in smartphones and other devices, that help applications running on those devices make the most (in terms of performance and cost) of the network as seen from the edges. Clearly this implies selecting the access gateway (5G, here we come) and selecting which network resources are made visible to third parties. The latter may come as a second step, but my bet is that it will come, since in the end it gives an operator a way to better monetize its resources.
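The end-to-end negotiation described above can be sketched as a simple feasibility check: each domain offers the latency and bandwidth it can commit on its segment, latencies add up along the path, and the usable bandwidth is capped by the weakest segment. The domains and figures below are invented for illustration.

# Hypothetical sketch of federated QoS negotiation across domains.
from dataclasses import dataclass

@dataclass
class Offer:
    domain: str
    latency_ms: float
    bandwidth_mbps: float

def negotiate_e2e(offers, max_latency_ms, min_bandwidth_mbps):
    # latencies add up along the path; bandwidth is limited by the bottleneck segment
    total_latency = sum(o.latency_ms for o in offers)
    bottleneck = min(o.bandwidth_mbps for o in offers)
    accepted = total_latency <= max_latency_ms and bottleneck >= min_bandwidth_mbps
    return accepted, total_latency, bottleneck

offers = [
    Offer("access-operator", 8.0, 150.0),
    Offer("transit-partner", 12.0, 400.0),
    Offer("edge-cloud", 3.0, 200.0),
]
print(negotiate_e2e(offers, max_latency_ms=30.0, min_bandwidth_mbps=100.0))
# -> (True, 23.0, 150.0)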
The orchestrator, whether it sits within a specific network domain with a reach limited to that network's own devices, reaches out to several "federated" networks, or sits outside the network and focuses on a specific application, will essentially create a "customized" network, something never heard of before.
Author Roberto Saracco



Army chooses Iron Bow to switch-out old SONET and ATM networking with IP-based upgrade

ROCK ISLAND ARSENAL, Ill., 29 April 2016. U.S. Army communications experts are looking to Iron Bow Technologies in Chantilly, Va., to upgrade the Army's old SONET- and ATM-based telecommunications networking equipment in South Korea with modern Internet Protocol (IP) gear.
Officials of the Army Contracting Command in Alexandria, Va., announced a $10.4 million contract to Iron Bow this week to replace the Army's Asynchronous Transfer Mode (ATM) and Synchronous Optical Network (SONET) infrastructure at the Army's Camp Humphreys near Anjeong-ri and Pyeongtaek, South Korea.
Iron Bow experts will replace ATM and SONET equipment at Camp Humphreys with Internet Protocol (IP)/Multiprotocol Label Switching (MPLS) equipment.
The Army Contracting Command-Alexandria awarded the contract on behalf of the Army Contracting Command at Rock Island Arsenal, Ill.
Iron Bow will integrate the new IP-based telecommunications equipment into the global U.S. Department of Defense Information Systems Network (DoDIN), which is operated and maintained by Defense Information Systems Agency (DISA).
The DoDIN is the core global enterprise network of the U.S. military, Army officials say. It comprises DOD-owned and -leased telecommunications networks, subsystems, and operations support.
The DoDIN carries voice, data, imagery, and video at all security classification levels, employing cyber-security measures to address known threats, officials say.
SONET and ATM have been in use since the 1980s and have gradually been replaced by IP-based technologies over the past decade.
On this contract Iron Bow will do the work in Chantilly, Va., and should be finished by April 2019. For more information contact Iron Bow Technologies online at www.ironbow.com, or the Army Contracting Command-Rock Island at www.acc.army.mil/contractingcenters/acc_ri.
