Brain Mind Robotics

"Silicon Valley" (Palo Alto - San Jose - San Francisco) California

We are building a robotic bio-computer based on the functional neuroanatomy of the human brain: a machine that can speak, reason, read, understand language, feel human emotions, think creatively, physically interact with its environment, and experience "consciousness."

We are hiring engineers with experience in computer science, machine learning, robotics, artificial intelligence, and the creation of auditory and visual platforms for speech, object, and face recognition.

STAGE 1: The creation of a stand-alone unit programmed to respond "reflexively" to simple visual and auditory stimuli; to "reflexively" move its "eyes," turn its "head," and open and close its "mouth" and "hands"; to make sucking, chewing, swallowing, swimming, and leg-lifting stepping movements; and to raise its "arms" and touch its "mouth" and "face."
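A reflexive unit of this kind can be sketched as a fixed mapping from sensed events to motor primitives, with no learning involved. All stimulus and action names below are illustrative placeholders, not part of any existing codebase:

```python
# Minimal sketch of a Stage 1 reflexive stimulus-to-motor mapping.
# Every stimulus and motor-primitive name here is a hypothetical example.

REFLEX_TABLE = {
    "bright_light": "close_eyes",
    "loud_sound": "turn_head",
    "cheek_touch": "open_mouth",   # rooting-style reflex
    "palm_touch": "close_hand",    # grasp-style reflex
}

def reflex_response(stimulus: str) -> str:
    """Return the motor primitive wired to a stimulus, or a no-op."""
    return REFLEX_TABLE.get(stimulus, "no_action")

print(reflex_response("palm_touch"))   # -> close_hand
print(reflex_response("novel_smell"))  # -> no_action
```

A real system would replace the string keys with classifier outputs from the vision and audition modules, but the hard-wired table captures what "reflexive" (as opposed to learned) response means here.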

Machine Learning Engineers--Robotics--Qualifications

• Experience in Natural Language Understanding, Computer Vision, Machine Learning, Algorithmic Foundations of Optimization, Data Mining, or Artificial Intelligence.
• Programming experience in one or more of the following: C, C++, Scala, R, Python, Objective-C, Swift.
• Experience developing and deploying high-quality, performant code into dev, test, QA/QC, and prod environments.
• Experience developing production-level code in one or more of the following areas: statistical modeling, machine learning algorithms, data pipelines.
• An understanding of machine learning pipelines, neural networks, survival analysis, cluster analysis, forecasting, anomaly detection, association rules, cognitive computing, artificial intelligence, etc.
• Hands-on knowledge of software engineering practices and principles.
• Experience building production-level machine learning pipelines using open-source technologies (Hadoop, Spark, Hive, Kafka, Storm).
• Ability to prototype simple machine learning pipelines to quickly decide whether an idea is promising.
• Experience using machine learning techniques for classification, parsing, and/or ranking.
• Experience in extracting signal from noise in large unstructured datasets a plus.
• Experience in iOS development and in AI, deep learning, and advanced machine learning technologies and their application to robotics, IoT, and IIoT.
• Ability to develop large-scale machine learning and deep learning platforms and frameworks, and to optimize AI platform performance and algorithms to enable key AI solutions, projects, and products.
• Ability to evaluate, modify, and maintain forks of open-source deep learning frameworks such as TensorFlow, Caffe, Cuda-Convnet, or PaddlePaddle.
• Ability to develop realistic AI/machine learning solutions.
• Proficiency in AI-assisted data mining, social mining, information classification, knowledge graphs, etc.
• Proficiency in AI-assisted decision-making and optimization for robotics.
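The "prototype quickly" qualification above can be illustrated with a toy end-to-end pipeline: generate data, fit a baseline model, and measure accuracy before investing in anything heavier. This sketch uses only the standard library and a deliberately simple nearest-centroid classifier; in practice one would reach for scikit-learn or a similar toolkit:

```python
# Toy end-to-end classification pipeline (data -> fit -> evaluate),
# stdlib only. A quick baseline like this is how one decides whether
# an idea is promising before building a production pipeline.
import random

def make_data(n=200, seed=0):
    """Two roughly Gaussian 2-D blobs labelled 0 and 1 (synthetic data)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        center = 0.0 if label == 0 else 3.0
        point = (center + rng.gauss(0, 1), center + rng.gauss(0, 1))
        data.append((point, label))
    return data

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def fit(train):
    """Nearest-centroid 'model': one mean point per class."""
    by_class = {0: [], 1: []}
    for point, label in train:
        by_class[label].append(point)
    return {c: centroid(pts) for c, pts in by_class.items()}

def predict(model, point):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda c: dist2(model[c], point))

data = make_data()
train, test = data[:150], data[150:]
model = fit(train)
accuracy = sum(predict(model, p) == y for p, y in test) / len(test)
print(f"accuracy: {accuracy:.2f}")  # well above chance on separated blobs
```

If a baseline this crude already beats chance comfortably, the signal is probably real and worth a proper model; if not, the idea can be discarded cheaply.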

Auditory & Speech Learning Engineers--Robotics--Qualifications

• Extensive knowledge and working experience in AI, machine learning, statistical inference, speech recognition, natural language processing and computational cognitive science.
• At least 3-5 years of U.S. work experience in deep learning, machine learning, NLP, and auditory or speech recognition.
• Hands-on experience with deep learning tools such as Caffe and TensorFlow.
• Familiarity with implementing algorithms on multi-core CPUs, clusters (MPI), GPUs.
• Experience with end-to-end AI-based solutions, either cloud-based or using device-intensive computation.
• Excellent programming skills: C/C++, CUDA, and Python.
• Experience with developing Visual and Face detection software/hardware, visual tracking, and image classification and recognition.
• Ability to develop Machine Learning Algorithms.
• Knowledge of machine learning toolkits such as TensorFlow, Caffe, NLTK, Theano.
• Grasp of Linux and database technologies.
• Programming experience in one or more of the following: C, C++, Scala, R, Python, Objective-C, Swift.
• Ability to develop realistic AI/machine learning solutions.
• Proficiency in AI-assisted data mining, social mining, information classification, knowledge graphs, etc.
• Proficiency in AI-assisted decision-making and optimization for robotics.
• Ability to build and maintain integrations with voice platforms such as Amazon Alexa or Google Home.
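The front end of any auditory pipeline like the one described above is framing the signal and deciding which frames contain sound at all. This stdlib-only sketch shows a frame-energy activity detector on a synthetic signal; real speech systems would use spectral features (e.g. MFCCs) rather than raw energy:

```python
# Sketch: frame-energy sound-activity detection, stdlib only.
# Production speech front ends use spectral features; this shows
# only the framing-plus-threshold idea behind them.
import math

def frame_energies(samples, frame_len=160):
    """Mean squared amplitude per non-overlapping frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def active_frames(samples, threshold=0.01, frame_len=160):
    """Indices of frames whose energy exceeds the threshold."""
    return [i for i, e in enumerate(frame_energies(samples, frame_len))
            if e > threshold]

# Synthetic signal: two silent frames, two frames of a 440 Hz tone
# at an 8 kHz sample rate, then two more silent frames.
silence = [0.0] * 320
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(320)]
signal = silence + tone + silence

print(active_frames(signal))  # -> [2, 3]
```

In a Stage 1 unit, an activity detection like this is what would gate the "turn head toward a loud sound" reflex.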

Vision Learning and Visual Recognition Engineers--Robotics--Qualifications

• Experience developing computer vision algorithms, and machine learning and deep learning algorithms.
• Proficiency in image and video algorithms such as SIFT, SURF, STIP, SfM, SLAM, and Multi-View Stereo (MVS).
• Experience with artificial intelligence, with an emphasis on deep learning methods such as convolutional neural networks and recurrent neural networks.
• Proficient in mathematical and statistical optimization theory and techniques.
• Programming experience in one or more of the following: C, C++, Scala, R, Python, MATLAB, Objective-C, Swift.
• Ability to develop realistic AI/machine learning solutions.
• Strong background in linear algebra, geometry, etc.
• Experience in graphics programming, computer vision, machine learning, deep learning, OpenCV, SLAM, image matching, feature tracking, and object classification.
• Experience working with data from Kinect, RGB cameras, depth cameras, and point clouds.
• Experience fusing data from multiple sensors: imaging, audio and others.
• Experience operating on large data sets, data collection, data labeling, feature design, training, cross-validation, feature down-selection, algorithm assessment, feature definition, indexing, search, information extraction, and performance optimization.
• Ability to assist in the design of a visual system capable of detecting movement, shape, size, and recognizing objects, hands, and faces.
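The lowest layer of the visual system described above — detecting movement, shape, and edges — reduces to filtering the image with small kernels, the same operation convolutional networks learn. This stdlib-only sketch applies a Sobel-style horizontal-gradient kernel by hand; production code would use OpenCV or NumPy:

```python
# Sketch: 2-D filtering with a Sobel-style kernel, stdlib only.
# This is the hand-written analogue of one convolutional layer;
# CNNs for object/face detection learn many such kernels.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(image, kernel):
    """Valid-mode 2-D filtering of a grayscale image (list of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            acc = sum(kernel[j][i] * image[y + j][x + i]
                      for j in range(kh) for i in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A vertical edge: dark left half, bright right half.
image = [[0, 0, 10, 10]] * 4
edges = convolve(image, SOBEL_X)
print(edges)  # -> [[40, 40], [40, 40]] — the edge lights up everywhere
```

A strong response across the output marks the vertical boundary between the dark and bright halves, which is exactly the kind of feature that shape and face detectors build on.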

Robotics and Motor/Movement Engineers--Qualifications

• Experience in robotics and in integrating machine vision and auditory detection with robotic motor functioning (simple head, hand, arm, leg, eye, and oral movements).
• Experience designing, developing, and building advanced robotic subsystems and systems that use auditory and visual sensing to perform simple, reflexive, and rhythmic movements.
• Functional experience with 2D and 3D CAD and CAE systems.
• Experience prototyping algorithms and working with robots.
• Experience with deep learning tools (Caffe, TensorFlow, Theano).
• Programming experience in one or more of the following: C, C++, Scala, R, Python, MATLAB, Objective-C, Swift.
• Ability to develop and implement motion-planning software and algorithms, including designing interfaces between subsystems.
• Ability to implement inverse kinematics and control of robotic systems.
• Proficiency in the integration of sensor fusion techniques as well as machine vision, auditory perception, and robotic movement.
• Experience with applied machine vision technologies including lighting, optics, electronics, and imaging algorithms is required.
• Ability to design and implement validation and calibration methodologies.
• PLC and HMI programming experience using commercial HMI software, including developing custom display elements and functionality within that environment.
• Experience writing complex VB.NET and/or C# applications a plus.
• Experience in automation systems, drives, sensors, servo controls.
• Experience with PLC, networking, device communications, integration, and design.
• Ability to assist in fashioning a simple robotic system which can "reflexively" (in response to simple visual, tactile, or auditory stimuli) turn its eyes and head, open its mouth, wave its hands, clench its fists, touch its face, and make stepping movements.
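The inverse kinematics requirement above, applied to a reflex such as "raise an arm and touch the face," has a closed-form solution for a planar two-link arm. This sketch derives the joint angles via the law of cosines and checks them with forward kinematics; the link lengths and target point are illustrative values only:

```python
# Sketch: closed-form inverse kinematics for a planar 2-link arm,
# the kind of computation behind simple "reach and touch" movements.
# Link lengths (l1, l2) and the target point are illustrative.
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return (shoulder, elbow) joint angles in radians reaching (x, y).

    Uses the law of cosines; raises ValueError if the point is out of reach.
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)  # one of the two mirror solutions
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

s, e = two_link_ik(1.2, 0.8)
print(forward(s, e))  # recovers approximately (1.2, 0.8)
```

Real arms with more joints need numerical solvers (e.g. Jacobian-based methods), but the two-link case shows the geometry that those solvers generalize.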