[PATCH v99,00/13] Add support for pure microMIPS kernel.


From: "Steven J. Hill" <[hidden email]>

This set of patches adds support for building a pure microMIPS kernel
image using only instructions from the microMIPS ISA. The result is a
kernel binary more than 20% smaller and an increase in execution speed
thanks to the smaller, faster instructions.

Douglas Leung (1):
  MIPS: microMIPS: Add vdso support.

Steven J. Hill (12):
  MIPS: microMIPS: Add support for microMIPS instructions.
  MIPS: Whitespace clean-ups after microMIPS additions.
  MIPS: microMIPS: Floating point support for 16-bit instructions.
  MIPS: microMIPS: Add support for exception handling.
  MIPS: microMIPS: Support handling of delay slots.
  MIPS: microMIPS: Add unaligned access support.
  MIPS: microMIPS: Add configuration option for microMIPS kernel.
  MIPS: microMIPS: Work-around for assembler bug.
  MIPS: microMIPS: Optimise 'memset' core library function.
  MIPS: microMIPS: Optimise 'strncpy' core library function.
  MIPS: microMIPS: Optimise 'strlen' core library function.
  MIPS: microMIPS: Optimise 'strnlen' core library function.

 arch/mips/Kconfig                      |   11 +
 arch/mips/Makefile                     |    1 +
 arch/mips/configs/sead3micro_defconfig |  125 +++
 arch/mips/include/asm/asm.h            |    2 +
 arch/mips/include/asm/branch.h         |   33 +-
 arch/mips/include/asm/fpu_emulator.h   |    7 +
 arch/mips/include/asm/inst.h           |  858 +++++++++++++++++-
 arch/mips/include/asm/mipsregs.h       |   41 +-
 arch/mips/include/asm/stackframe.h     |   12 +-
 arch/mips/include/asm/uaccess.h        |   14 +-
 arch/mips/kernel/branch.c              |  183 +++-
 arch/mips/kernel/cpu-probe.c           |    3 +
 arch/mips/kernel/genex.S               |   74 +-
 arch/mips/kernel/proc.c                |    5 +
 arch/mips/kernel/process.c             |  101 +++
 arch/mips/kernel/scall32-o32.S         |    9 +
 arch/mips/kernel/signal.c              |    6 +
 arch/mips/kernel/smtc-asm.S            |    3 +
 arch/mips/kernel/traps.c               |  296 +++++--
 arch/mips/kernel/unaligned.c           | 1496 +++++++++++++++++++++++++++-----
 arch/mips/lib/memset.S                 |   84 +-
 arch/mips/lib/strlen_user.S            |    9 +-
 arch/mips/lib/strncpy_user.S           |   28 +-
 arch/mips/lib/strnlen_user.S           |    2 +-
 arch/mips/math-emu/cp1emu.c            |  766 ++++++++++++++--
 arch/mips/math-emu/dsemul.c            |   40 +-
 arch/mips/mm/tlbex.c                   |   21 +
 arch/mips/mm/uasm-micromips.c          |  194 +++++
 arch/mips/mm/uasm-mips.c               |  178 ++++
 arch/mips/mm/uasm.c                    |  211 +----
 arch/mips/mti-sead3/sead3-init.c       |   48 +
 31 files changed, 4159 insertions(+), 702 deletions(-)
 create mode 100644 arch/mips/configs/sead3micro_defconfig
 create mode 100644 arch/mips/mm/uasm-micromips.c
 create mode 100644 arch/mips/mm/uasm-mips.c

--
1.7.9.5



[PATCH v99,01/13] MIPS: microMIPS: Add support for microMIPS instructions.

From: "Steven J. Hill" <[hidden email]>

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/include/asm/inst.h  |  771 +++++++++++++++++++++++++++++++++++++++++
 arch/mips/mm/uasm-micromips.c |  194 +++++++++++
 arch/mips/mm/uasm-mips.c      |  178 ++++++++++
 arch/mips/mm/uasm.c           |  211 ++---------
 4 files changed, 1177 insertions(+), 177 deletions(-)
 create mode 100644 arch/mips/mm/uasm-micromips.c
 create mode 100644 arch/mips/mm/uasm-mips.c

diff --git a/arch/mips/include/asm/inst.h b/arch/mips/include/asm/inst.h
index ab84064..c76899f 100644
--- a/arch/mips/include/asm/inst.h
+++ b/arch/mips/include/asm/inst.h
@@ -7,6 +7,7 @@
  *
  * Copyright (C) 1996, 2000 by Ralf Baechle
  * Copyright (C) 2006 by Thiemo Seufer
+ * Copyright (C) 2012 MIPS Technologies, Inc.  All rights reserved.
  */
 #ifndef _ASM_INST_H
 #define _ASM_INST_H
@@ -267,6 +268,225 @@ struct b_format { /* BREAK and SYSCALL */
  unsigned int func:6;
 };
 
+struct fb_format { /* FPU branch format */
+ unsigned int opcode:6;
+ unsigned int bc:5;
+ unsigned int cc:3;
+ unsigned int flag:2;
+ unsigned int simmediate:16;
+};
+
+struct fp0_format {      /* FPU multiply and add format (MIPS32) */
+ unsigned int opcode:6;
+ unsigned int fmt:5;
+ unsigned int ft:5;
+ unsigned int fs:5;
+ unsigned int fd:5;
+ unsigned int func:6;
+};
+
+struct mm_fp0_format {      /* FPU multiply and add format (microMIPS) */
+ unsigned int opcode:6;
+ unsigned int ft:5;
+ unsigned int fs:5;
+ unsigned int fd:5;
+ unsigned int fmt:3;
+ unsigned int op:2;
+ unsigned int func:6;
+};
+
+struct fp1_format {      /* FPU mfc1 and cfc1 format (MIPS32) */
+ unsigned int opcode:6;
+ unsigned int op:5;
+ unsigned int rt:5;
+ unsigned int fs:5;
+ unsigned int fd:5;
+ unsigned int func:6;
+};
+
+struct mm_fp1_format {      /* FPU mfc1 and cfc1 format (microMIPS) */
+ unsigned int opcode:6;
+ unsigned int rt:5;
+ unsigned int fs:5;
+ unsigned int fmt:2;
+ unsigned int op:8;
+ unsigned int func:6;
+};
+
+struct mm_fp2_format {      /* FPU movt and movf format (microMIPS) */
+ unsigned int opcode:6;
+ unsigned int fd:5;
+ unsigned int fs:5;
+ unsigned int cc:3;
+ unsigned int zero:2;
+ unsigned int fmt:2;
+ unsigned int op:3;
+ unsigned int func:6;
+};
+
+struct mm_fp3_format {      /* FPU abs and neg format (microMIPS) */
+ unsigned int opcode:6;
+ unsigned int rt:5;
+ unsigned int fs:5;
+ unsigned int fmt:3;
+ unsigned int op:7;
+ unsigned int func:6;
+};
+
+struct mm_fp4_format {      /* FPU c.cond format (microMIPS) */
+ unsigned int opcode:6;
+ unsigned int rt:5;
+ unsigned int fs:5;
+ unsigned int cc:3;
+ unsigned int fmt:3;
+ unsigned int cond:4;
+ unsigned int func:6;
+};
+
+struct mm_fp5_format {      /* FPU lwxc1 and swxc1 format (microMIPS) */
+ unsigned int opcode:6;
+ unsigned int index:5;
+ unsigned int base:5;
+ unsigned int fd:5;
+ unsigned int op:5;
+ unsigned int func:6;
+};
+
+struct fp6_format { /* FPU madd and msub format (MIPS IV) */
+ unsigned int opcode:6;
+ unsigned int fr:5;
+ unsigned int ft:5;
+ unsigned int fs:5;
+ unsigned int fd:5;
+ unsigned int func:6;
+};
+
+struct mm_fp6_format { /* FPU madd and msub format (microMIPS) */
+ unsigned int opcode:6;
+ unsigned int ft:5;
+ unsigned int fs:5;
+ unsigned int fd:5;
+ unsigned int fr:5;
+ unsigned int func:6;
+};
+
+struct mm16b1_format { /* microMIPS 16-bit branch format */
+ unsigned int opcode:6;
+ unsigned int rs:3;
+ signed int simmediate:7;
+ unsigned int duplicate:16; /* a copy of the instr */
+};
+
+struct mm16b0_format { /* microMIPS 16-bit branch format */
+ unsigned int opcode:6;
+ signed int simmediate:10;
+ unsigned int duplicate:16; /* a copy of the instr */
+};
+
+struct mm_i_format { /* Immediate format (addi, lw, ...) */
+ unsigned int opcode:6;
+ unsigned int rt:5;
+ unsigned int rs:5;
+ signed int simmediate:16;
+};
+
+/*  MIPS16e */
+
+struct rr {
+ unsigned int opcode:5;
+ unsigned int rx:3;
+ unsigned int nd:1;
+ unsigned int l:1;
+ unsigned int ra:1;
+ unsigned int func:5;
+};
+
+struct jal {
+ unsigned int opcode:5;
+ unsigned int x:1;
+ unsigned int imm20_16:5;
+ signed int imm25_21:5;
+ /* unsigned int    imm20_15:0;  here is only first 16bits in first HW */
+};
+
+struct i64 {
+ unsigned int opcode:5;
+ unsigned int func:3;
+ unsigned int imm:8;
+};
+
+struct ri64 {
+ unsigned int opcode:5;
+ unsigned int func:3;
+ unsigned int ry:3;
+ unsigned int imm:5;
+};
+
+struct ri {
+ unsigned int opcode:5;
+ unsigned int rx:3;
+ unsigned int imm:8;
+};
+
+struct rri {
+ unsigned int opcode:5;
+ unsigned int rx:3;
+ unsigned int ry:3;
+ unsigned int imm:5;
+};
+
+struct i8 {
+ unsigned int opcode:5;
+ unsigned int func:3;
+ unsigned int imm:8;
+};
+
+struct mm_m_format {
+ unsigned int opcode:6;
+ unsigned int rd:5;
+ unsigned int base:5;
+ unsigned int func:4;
+ signed int simmediate:12;
+};
+
+struct mm_x_format {
+ unsigned int opcode:6;
+ unsigned int index:5;
+ unsigned int base:5;
+ unsigned int rd:5;
+ unsigned int func:11;
+};
+
+struct mm16_m_format {
+ unsigned int opcode:6;
+ unsigned int func:4;
+ unsigned int rlist:2;
+ unsigned int imm:4;
+ unsigned int duplicate:16; /* a copy of the instr */
+};
+
+struct mm16_rb_format {
+ unsigned int opcode:6;
+ unsigned int rt:3;
+ unsigned int base:3;
+ signed int simmediate:4;
+ unsigned int duplicate:16; /* a copy of the instr */
+};
+
+struct mm16_r5_format {
+ unsigned int opcode:6;
+ unsigned int rt:5;
+ signed int simmediate:5;
+ unsigned int duplicate:16; /* a copy of the instr */
+};
+
+struct mm16_r3_format {
+ unsigned int opcode:6;
+ unsigned int rt:3;
+ signed int simmediate:7;
+ unsigned int duplicate:16; /* a copy of the instr */
+};
+
 #elif defined(__MIPSEL__)
 
 struct j_format { /* Jump format */
@@ -340,6 +560,225 @@ struct b_format { /* BREAK and SYSCALL */
  unsigned int opcode:6;
 };
 
+struct fb_format { /* FPU branch format */
+ unsigned int simmediate:16;
+ unsigned int flag:2;
+ unsigned int cc:3;
+ unsigned int bc:5;
+ unsigned int opcode:6;
+};
+
+struct fp0_format { /* FPU multiply and add format (MIPS32) */
+ unsigned int func:6;
+ unsigned int fd:5;
+ unsigned int fs:5;
+ unsigned int ft:5;
+ unsigned int fmt:5;
+ unsigned int opcode:6;
+};
+
+struct mm_fp0_format { /* FPU multiply and add format (microMIPS) */
+ unsigned int func:6;
+ unsigned int op:2;
+ unsigned int fmt:3;
+ unsigned int fd:5;
+ unsigned int fs:5;
+ unsigned int ft:5;
+ unsigned int opcode:6;
+};
+
+struct fp1_format { /* FPU mfc1 and cfc1 format (MIPS32) */
+ unsigned int func:6;
+ unsigned int fd:5;
+ unsigned int fs:5;
+ unsigned int rt:5;
+ unsigned int op:5;
+ unsigned int opcode:6;
+};
+
+struct mm_fp1_format { /* FPU mfc1 and cfc1 format (microMIPS) */
+ unsigned int func:6;
+ unsigned int op:8;
+ unsigned int fmt:2;
+ unsigned int fs:5;
+ unsigned int rt:5;
+ unsigned int opcode:6;
+};
+
+struct mm_fp2_format { /* FPU movt and movf format (microMIPS) */
+ unsigned int func:6;
+ unsigned int op:3;
+ unsigned int fmt:2;
+ unsigned int zero:2;
+ unsigned int cc:3;
+ unsigned int fs:5;
+ unsigned int fd:5;
+ unsigned int opcode:6;
+};
+
+struct mm_fp3_format { /* FPU abs and neg format (microMIPS) */
+ unsigned int func:6;
+ unsigned int op:7;
+ unsigned int fmt:3;
+ unsigned int fs:5;
+ unsigned int rt:5;
+ unsigned int opcode:6;
+};
+
+struct mm_fp4_format { /* FPU c.cond format (microMIPS) */
+ unsigned int func:6;
+ unsigned int cond:4;
+ unsigned int fmt:3;
+ unsigned int cc:3;
+ unsigned int fs:5;
+ unsigned int rt:5;
+ unsigned int opcode:6;
+};
+
+struct mm_fp5_format { /* FPU lwxc1 and swxc1 format (microMIPS) */
+ unsigned int func:6;
+ unsigned int op:5;
+ unsigned int fd:5;
+ unsigned int base:5;
+ unsigned int index:5;
+ unsigned int opcode:6;
+};
+
+struct fp6_format { /* FPU madd and msub format (MIPS IV) */
+ unsigned int func:6;
+ unsigned int fd:5;
+ unsigned int fs:5;
+ unsigned int ft:5;
+ unsigned int fr:5;
+ unsigned int opcode:6;
+};
+
+struct mm_fp6_format { /* FPU madd and msub format (microMIPS) */
+ unsigned int func:6;
+ unsigned int fr:5;
+ unsigned int fd:5;
+ unsigned int fs:5;
+ unsigned int ft:5;
+ unsigned int opcode:6;
+};
+
+struct mm16b1_format { /* microMIPS 16-bit branch format */
+ unsigned int duplicate:16; /* a copy of the instr */
+ signed int simmediate:7;
+ unsigned int rs:3;
+ unsigned int opcode:6;
+};
+
+struct mm16b0_format { /* microMIPS 16-bit branch format */
+ unsigned int duplicate:16; /* a copy of the instr */
+ signed int simmediate:10;
+ unsigned int opcode:6;
+};
+
+struct mm_i_format { /* Immediate format */
+ signed int simmediate:16;
+ unsigned int rs:5;
+ unsigned int rt:5;
+ unsigned int opcode:6;
+};
+
+/*  MIPS16e */
+
+struct rr {
+ unsigned int func:5;
+ unsigned int ra:1;
+ unsigned int l:1;
+ unsigned int nd:1;
+ unsigned int rx:3;
+ unsigned int opcode:5;
+};
+
+struct jal {
+ /* unsigned int    imm20_15:0;  here is only first 16bits in first HW */
+ signed int imm25_21:5;
+ unsigned int imm20_16:5;
+ unsigned int x:1;
+ unsigned int opcode:5;
+};
+
+struct i64 {
+ unsigned int imm:8;
+ unsigned int func:3;
+ unsigned int opcode:5;
+};
+
+struct ri64 {
+ unsigned int imm:5;
+ unsigned int ry:3;
+ unsigned int func:3;
+ unsigned int opcode:5;
+};
+
+struct ri {
+ unsigned int imm:8;
+ unsigned int rx:3;
+ unsigned int opcode:5;
+};
+
+struct rri {
+ unsigned int imm:5;
+ unsigned int ry:3;
+ unsigned int rx:3;
+ unsigned int opcode:5;
+};
+
+struct i8 {
+ unsigned int imm:8;
+ unsigned int func:3;
+ unsigned int opcode:5;
+};
+
+struct mm_m_format {
+ signed int simmediate:12;
+ unsigned int func:4;
+ unsigned int base:5;
+ unsigned int rd:5;
+ unsigned int opcode:6;
+};
+
+struct mm_x_format {
+ unsigned int func:11;
+ unsigned int rd:5;
+ unsigned int base:5;
+ unsigned int index:5;
+ unsigned int opcode:6;
+};
+
+struct mm16_m_format {
+ unsigned int duplicate:16; /* a copy of the instr */
+ unsigned int imm:4;
+ unsigned int rlist:2;
+ unsigned int func:4;
+ unsigned int opcode:6;
+};
+
+struct mm16_rb_format {
+ unsigned int duplicate:16; /* a copy of the instr */
+ signed int simmediate:4;
+ unsigned int base:3;
+ unsigned int rt:3;
+ unsigned int opcode:6;
+};
+
+struct mm16_r5_format {
+ unsigned int duplicate:16; /* a copy of the instr */
+ signed int simmediate:5;
+ unsigned int rt:5;
+ unsigned int opcode:6;
+};
+
+struct mm16_r3_format {
+ unsigned int duplicate:16; /* a copy of the instr */
+ signed int simmediate:7;
+ unsigned int rt:3;
+ unsigned int opcode:6;
+};
+
 #else /* !defined (__MIPSEB__) && !defined (__MIPSEL__) */
 #error "MIPS but neither __MIPSEL__ nor __MIPSEB__?"
 #endif
@@ -356,6 +795,26 @@ union mips_instruction {
  struct f_format f_format;
  struct ma_format ma_format;
  struct b_format b_format;
+ struct mm16b0_format mm16b0_format;
+ struct mm16b1_format mm16b1_format;
+ struct mm_i_format mm_i_format;
+ struct fb_format fb_format;
+ struct fp0_format fp0_format;
+ struct fp1_format fp1_format;
+ struct fp6_format fp6_format;
+ struct mm_fp0_format mm_fp0_format;
+ struct mm_fp1_format mm_fp1_format;
+ struct mm_fp2_format mm_fp2_format;
+ struct mm_fp3_format mm_fp3_format;
+ struct mm_fp4_format mm_fp4_format;
+ struct mm_fp5_format mm_fp5_format;
+ struct mm_fp6_format mm_fp6_format;
+ struct mm_m_format mm_m_format;
+ struct mm_x_format mm_x_format;
+ struct mm16_m_format mm16_m_format;
+ struct mm16_rb_format mm16_rb_format;
+ struct mm16_r3_format mm16_r3_format;
+ struct mm16_r5_format mm16_r5_format;
 };
 
 /* HACHACHAHCAHC ...  */
@@ -418,4 +877,316 @@ union mips_instruction {
 
 typedef unsigned int mips_instruction;
 
+/* The following are for microMIPS mode */
+#define MM_16_OPCODE_SFT        10
+#define MM_NOP16                0x0c00
+#define MM_POOL32A_MINOR_MSK    0x3f
+#define MM_POOL32A_MINOR_SFT    0x6
+#define MIPS32_COND_FC          0x30
+
+/*
+ * Major opcodes; microMIPS mode.
+ */
+enum mm_major_op {
+ mm_pool32a_op, mm_pool16a_op, mm_lbu16_op, mm_move16_op,
+ mm_addi32_op, mm_lbu32_op, mm_sb32_op, mm_lb32_op,
+ mm_pool32b_op, mm_pool16b_op, mm_lhu16_op, mm_andi16_op,
+ mm_addiu32_op, mm_lhu32_op, mm_sh32_op, mm_lh32_op,
+ mm_pool32i_op, mm_pool16c_op, mm_lwsp16_op, mm_pool16d_op,
+ mm_ori32_op, mm_pool32f_op, mm_reserved1_op, mm_reserved2_op,
+ mm_pool32c_op, mm_lwgp16_op, mm_lw16_op, mm_pool16e_op,
+ mm_xori32_op, mm_jals32_op, mm_addiupc_op, mm_reserved3_op,
+ mm_reserved4_op, mm_pool16f_op, mm_sb16_op, mm_beqz16_op,
+ mm_slti32_op, mm_beq32_op, mm_swc132_op, mm_lwc132_op,
+ mm_reserved5_op, mm_reserved6_op, mm_sh16_op, mm_bnez16_op,
+ mm_sltiu32_op, mm_bne32_op, mm_sdc132_op, mm_ldc132_op,
+ mm_reserved7_op, mm_reserved8_op, mm_swsp16_op, mm_b16_op,
+ mm_andi32_op, mm_j32_op, mm_sd32_op, mm_ld32_op,
+ mm_reserved11_op, mm_reserved12_op, mm_sw16_op, mm_li16_op,
+ mm_jalx32_op, mm_jal32_op, mm_sw32_op, mm_lw32_op,
+};
+
+/*
+ * POOL32I minor opcodes.
+ */
+enum mm_32i_minor_op {
+ mm_bltz_op, mm_bltzal_op, mm_bgez_op, mm_bgezal_op,
+ mm_blez_op, mm_bnezc_op, mm_bgtz_op, mm_beqzc_op,
+ mm_tlti_op, mm_tgei_op, mm_tltiu_op, mm_tgeiu_op,
+ mm_tnei_op, mm_lui_op, mm_teqi_op, mm_reserved13_op,
+ mm_synci_op, mm_bltzals_op, mm_reserved14_op, mm_bgezals_op,
+ mm_bc2f_op, mm_bc2t_op, mm_reserved15_op, mm_reserved16_op,
+ mm_reserved17_op, mm_reserved18_op, mm_bposge64_op, mm_bposge32_op,
+ mm_bc1f_op, mm_bc1t_op, mm_reserved19_op, mm_reserved20_op,
+ mm_bc1any2f_op, mm_bc1any2t_op, mm_bc1any4f_op, mm_bc1any4t_op,
+};
+
+/*
+ * POOL32A minor opcodes.
+ */
+enum mm_32a_minor_op {
+ mm_sll32_op = 0x000,
+ mm_ins_op = 0x00c,
+ mm_ext_op = 0x02c,
+ mm_pool32axf_op = 0x03c,
+ mm_srl32_op = 0x040,
+ mm_sra_op = 0x080,
+ mm_rotr_op = 0x0c0,
+ mm_lwxs_op = 0x118,
+ mm_addu32_op = 0x150,
+ mm_subu32_op = 0x1d0,
+ mm_and_op = 0x250,
+ mm_or32_op = 0x290,
+ mm_xor32_op = 0x310,
+};
+
+/*
+ * POOL32B functions.
+ */
+enum mm_32b_func {
+ mm_lwc2_func = 0x0,
+ mm_lwp_func = 0x1,
+ mm_ldc2_func = 0x2,
+ mm_ldp_func = 0x4,
+ mm_lwm32_func = 0x5,
+ mm_cache_func = 0x6,
+ mm_ldm_func = 0x7,
+ mm_swc2_func = 0x8,
+ mm_swp_func = 0x9,
+ mm_sdc2_func = 0xa,
+ mm_sdp_func = 0xc,
+ mm_swm32_func = 0xd,
+ mm_sdm_func = 0xf,
+};
+
+/*
+ * POOL32C functions.
+ */
+enum mm_32c_func {
+ mm_pref_func = 0x2,
+ mm_ll_func = 0x3,
+ mm_swr_func = 0x9,
+ mm_sc_func = 0xb,
+ mm_lwu_func = 0xe,
+};
+
+/*
+ * POOL32AXF minor opcodes.
+ */
+enum mm_32axf_minor_op {
+ mm_mfc0_op = 0x003,
+ mm_mtc0_op = 0x00b,
+ mm_tlbp_op = 0x00d,
+ mm_jalr_op = 0x03c,
+ mm_tlbr_op = 0x04d,
+ mm_jalrhb_op = 0x07c,
+ mm_tlbwi_op = 0x08d,
+ mm_tlbwr_op = 0x0cd,
+ mm_jalrs_op = 0x13c,
+ mm_jalrshb_op = 0x17c,
+ mm_syscall_op = 0x22d,
+ mm_eret_op = 0x3cd,
+};
+
+/*
+ * POOL32F minor opcodes.
+ */
+enum mm_32f_minor_op {
+ mm_32f_00_op = 0x00,
+ mm_32f_01_op = 0x01,
+ mm_32f_02_op = 0x02,
+ mm_32f_10_op = 0x08,
+ mm_32f_11_op = 0x09,
+ mm_32f_12_op = 0x0a,
+ mm_32f_20_op = 0x10,
+ mm_32f_30_op = 0x18,
+ mm_32f_40_op = 0x20,
+ mm_32f_41_op = 0x21,
+ mm_32f_42_op = 0x22,
+ mm_32f_50_op = 0x28,
+ mm_32f_51_op = 0x29,
+ mm_32f_52_op = 0x2a,
+ mm_32f_60_op = 0x30,
+ mm_32f_70_op = 0x38,
+ mm_32f_73_op = 0x3b,
+ mm_32f_74_op = 0x3c,
+};
+
+/*
+ * POOL32F secondary minor opcodes.
+ */
+enum mm_32f_10_minor_op {
+ mm_lwxc1_op = 0x1,
+ mm_swxc1_op,
+ mm_ldxc1_op,
+ mm_sdxc1_op,
+ mm_luxc1_op,
+ mm_suxc1_op,
+};
+
+enum mm_32f_func {
+ mm_lwxc1_func = 0x048,
+ mm_swxc1_func = 0x088,
+ mm_ldxc1_func = 0x0c8,
+ mm_sdxc1_func = 0x108,
+};
+
+/*
+ * POOL32F secondary minor opcodes.
+ */
+enum mm_32f_40_minor_op {
+ mm_fmovf_op,
+ mm_fmovt_op,
+};
+
+/*
+ * POOL32F secondary minor opcodes.
+ */
+enum mm_32f_60_minor_op {
+ mm_fadd_op,
+ mm_fsub_op,
+ mm_fmul_op,
+ mm_fdiv_op,
+};
+
+/*
+ * POOL32F secondary minor opcodes.
+ */
+enum mm_32f_70_minor_op {
+ mm_fmovn_op,
+ mm_fmovz_op,
+};
+
+/*
+ * POOL32F secondary minor opcodes (POOL32FXF).
+ */
+enum mm_32f_73_minor_op {
+ mm_fmov0_op = 0x01,
+ mm_fcvtl_op = 0x04,
+ mm_movf0_op = 0x05,
+ mm_frsqrt_op = 0x08,
+ mm_ffloorl_op = 0x0c,
+ mm_fabs0_op = 0x0d,
+ mm_fcvtw_op = 0x24,
+ mm_movt0_op = 0x25,
+ mm_fsqrt_op = 0x28,
+ mm_ffloorw_op = 0x2c,
+ mm_fneg0_op = 0x2d,
+ mm_cfc1_op = 0x40,
+ mm_frecip_op = 0x48,
+ mm_fceill_op = 0x4c,
+ mm_fcvtd0_op = 0x4d,
+ mm_ctc1_op = 0x60,
+ mm_fceilw_op = 0x6c,
+ mm_fcvts0_op = 0x6d,
+ mm_mfc1_op = 0x80,
+ mm_fmov1_op = 0x81,
+ mm_movf1_op = 0x85,
+ mm_ftruncl_op = 0x8c,
+ mm_fabs1_op = 0x8d,
+ mm_mtc1_op = 0xa0,
+ mm_movt1_op = 0xa5,
+ mm_ftruncw_op = 0xac,
+ mm_fneg1_op = 0xad,
+ mm_froundl_op = 0xcc,
+ mm_fcvtd1_op = 0xcd,
+ mm_froundw_op = 0xec,
+ mm_fcvts1_op = 0xed,
+};
+
+/*
+ * POOL16C minor opcodes.
+ */
+enum mm_16c_minor_op {
+ mm_lwm16_op = 0x04,
+ mm_swm16_op = 0x05,
+ mm_jr16_op = 0x18,
+ mm_jrc_op = 0x1a,
+ mm_jalr16_op = 0x1c,
+ mm_jalrs16_op = 0x1e,
+};
+
+/*
+ * POOL16D minor opcodes.
+ */
+enum mm_16d_minor_op {
+ mm_addius5_func,
+ mm_addiusp_func,
+};
+
+struct decoded_instn {
+ mips_instruction insn;
+ mips_instruction next_insn;
+ int pc_inc;
+ int next_pc_inc;
+ int micro_mips_mode;
+};
+
+union mips16e_instruction {
+ unsigned int full:16;
+ struct rr rr;
+ struct jal jal;
+ struct i64 i64;
+ struct ri64 ri64;
+ struct ri ri;
+ struct rri rri;
+ struct i8 i8;
+};
+
+enum MIPS16e_ops {
+ MIPS16e_jal_op = 003,
+ MIPS16e_ld_op = 007,
+ MIPS16e_i8_op = 014,
+ MIPS16e_sd_op = 017,
+ MIPS16e_lb_op = 020,
+ MIPS16e_lh_op = 021,
+ MIPS16e_lwsp_op = 022,
+ MIPS16e_lw_op = 023,
+ MIPS16e_lbu_op = 024,
+ MIPS16e_lhu_op = 025,
+ MIPS16e_lwpc_op = 026,
+ MIPS16e_lwu_op = 027,
+ MIPS16e_sb_op = 030,
+ MIPS16e_sh_op = 031,
+ MIPS16e_swsp_op = 032,
+ MIPS16e_sw_op = 033,
+ MIPS16e_rr_op = 035,
+ MIPS16e_extend_op = 036,
+ MIPS16e_i64_op = 037,
+};
+
+enum MIPS16e_i64_func {
+ MIPS16e_ldsp_func,
+ MIPS16e_sdsp_func,
+ MIPS16e_sdrasp_func,
+ MIPS16e_dadjsp_func,
+ MIPS16e_ldpc_func,
+};
+
+enum MIPS16e_rr_func {
+ MIPS16e_jr_func,
+};
+
+enum MIPS6e_i8_func {
+ MIPS16e_swrasp_func = 02,
+};
+
+/*
+ * This function returns 1 if the microMIPS instruction is a 16-bit
+ * instruction, and 0 otherwise.
+ */
+#define MIPS_ISA_MODE   01
+#define is16mode(regs)  (regs->cp0_epc & MIPS_ISA_MODE)
+
+static inline int mm_is16bit(u16 instr)
+{
+ /* take LS 3 bits */
+ u16 opcode_low = (instr >> MM_16_OPCODE_SFT) & 0x7;
+
+ if (opcode_low >= 1 && opcode_low <= 3)
+ return 1;
+ else
+ return 0;
+}
+
 #endif /* _ASM_INST_H */
diff --git a/arch/mips/mm/uasm-micromips.c b/arch/mips/mm/uasm-micromips.c
new file mode 100644
index 0000000..f2b834a
--- /dev/null
+++ b/arch/mips/mm/uasm-micromips.c
@@ -0,0 +1,194 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2012 MIPS Technologies, Inc.  All rights reserved.
+ */
+
+#define RS_MASK 0x1f
+#define RS_SH 16
+#define RT_MASK 0x1f
+#define RT_SH 21
+#define SCIMM_MASK 0x3ff
+#define SCIMM_SH 16
+
+/* This macro sets the non-variable bits of an instruction. */
+#define M(a, b, c, d, e, f) \
+ ((a) << OP_SH \
+ | (b) << RT_SH \
+ | (c) << RS_SH \
+ | (d) << RD_SH \
+ | (e) << RE_SH \
+ | (f) << FUNC_SH)
+
+static struct insn insn_table[] __uasminitdata = {
+ { insn_addu, M(mm_pool32a_op, 0, 0, 0, 0, mm_addu32_op), RT | RS | RD },
+ { insn_addiu, M(mm_addiu32_op, 0, 0, 0, 0, 0), RT | RS | SIMM },
+ { insn_and, M(mm_pool32a_op, 0, 0, 0, 0, mm_and_op), RT | RS | RD },
+ { insn_andi, M(mm_andi32_op, 0, 0, 0, 0, 0), RT | RS | UIMM },
+ { insn_beq, M(mm_beq32_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
+ { insn_beql, 0, 0 },
+ { insn_bgez, M(mm_pool32i_op, mm_bgez_op, 0, 0, 0, 0), RS | BIMM },
+ { insn_bgezl, 0, 0 },
+ { insn_bltz, M(mm_pool32i_op, mm_bltz_op, 0, 0, 0, 0), RS | BIMM },
+ { insn_bltzl, 0, 0 },
+ { insn_bne, M(mm_bne32_op, 0, 0, 0, 0, 0), RT | RS | BIMM },
+ { insn_cache, M(mm_pool32b_op, 0, 0, mm_cache_func, 0, 0), RT | RS | SIMM },
+ { insn_daddu, 0, 0 },
+ { insn_daddiu, 0, 0 },
+ { insn_dmfc0, 0, 0 },
+ { insn_dmtc0, 0, 0 },
+ { insn_dsll, 0, 0 },
+ { insn_dsll32, 0, 0 },
+ { insn_dsra, 0, 0 },
+ { insn_dsrl, 0, 0 },
+ { insn_dsrl32, 0, 0 },
+ { insn_drotr, 0, 0 },
+ { insn_drotr32, 0, 0 },
+ { insn_dsubu, 0, 0 },
+ { insn_eret, M(mm_pool32a_op, 0, 0, 0, mm_eret_op, mm_pool32axf_op), 0 },
+ { insn_ins, M(mm_pool32a_op, 0, 0, 0, 0, mm_ins_op), RT | RS | RD | RE },
+ { insn_ext, M(mm_pool32a_op, 0, 0, 0, 0, mm_ext_op), RT | RS | RD | RE },
+ { insn_j, M(mm_j32_op, 0, 0, 0, 0, 0), JIMM },
+ { insn_jal, M(mm_jal32_op, 0, 0, 0, 0, 0), JIMM },
+ { insn_jr, M(mm_pool32a_op, 0, 0, 0, mm_jalr_op, mm_pool32axf_op), RS },
+ { insn_ld, 0, 0 },
+ { insn_ll, M(mm_pool32c_op, 0, 0, (mm_ll_func << 1), 0, 0), RS | RT | SIMM },
+ { insn_lld, 0, 0 },
+ { insn_lui, M(mm_pool32i_op, mm_lui_op, 0, 0, 0, 0), RS | SIMM },
+ { insn_lw, M(mm_lw32_op, 0, 0, 0, 0, 0), RT | RS | SIMM },
+ { insn_mfc0, M(mm_pool32a_op, 0, 0, 0, mm_mfc0_op, mm_pool32axf_op), RT | RS | RD },
+ { insn_mtc0, M(mm_pool32a_op, 0, 0, 0, mm_mtc0_op, mm_pool32axf_op), RT | RS | RD },
+ { insn_or, M(mm_pool32a_op, 0, 0, 0, 0, mm_or32_op), RT | RS | RD },
+ { insn_ori, M(mm_ori32_op, 0, 0, 0, 0, 0), RT | RS | UIMM },
+ { insn_pref, M(mm_pool32c_op, 0, 0, (mm_pref_func << 1), 0, 0), RT | RS | SIMM },
+ { insn_rfe, 0, 0 },
+ { insn_sc, M(mm_pool32c_op, 0, 0, (mm_sc_func << 1), 0, 0), RT | RS | SIMM },
+ { insn_scd, 0, 0 },
+ { insn_sd, 0, 0 },
+ { insn_sll, M(mm_pool32a_op, 0, 0, 0, 0, mm_sll32_op), RT | RS | RD },
+ { insn_sra, M(mm_pool32a_op, 0, 0, 0, 0, mm_sra_op), RT | RS | RD },
+ { insn_srl, M(mm_pool32a_op, 0, 0, 0, 0, mm_srl32_op), RT | RS | RD },
+ { insn_rotr, M(mm_pool32a_op, 0, 0, 0, 0, mm_rotr_op), RT | RS | RD },
+ { insn_subu, M(mm_pool32a_op, 0, 0, 0, 0, mm_subu32_op), RT | RS | RD },
+ { insn_sw, M(mm_sw32_op, 0, 0, 0, 0, 0), RT | RS | SIMM },
+ { insn_tlbp, M(mm_pool32a_op, 0, 0, 0, mm_tlbp_op, mm_pool32axf_op), 0 },
+ { insn_tlbr, M(mm_pool32a_op, 0, 0, 0, mm_tlbr_op, mm_pool32axf_op), 0 },
+ { insn_tlbwi, M(mm_pool32a_op, 0, 0, 0, mm_tlbwi_op, mm_pool32axf_op), 0 },
+ { insn_tlbwr, M(mm_pool32a_op, 0, 0, 0, mm_tlbwr_op, mm_pool32axf_op), 0 },
+ { insn_xor, M(mm_pool32a_op, 0, 0, 0, 0, mm_xor32_op), RT | RS | RD },
+ { insn_xori, M(mm_xori32_op, 0, 0, 0, 0, 0), RT | RS | UIMM },
+ { insn_dins, 0, 0 },
+ { insn_dinsm, 0, 0 },
+ { insn_syscall, M(mm_pool32a_op, 0, 0, 0, mm_syscall_op, mm_pool32axf_op), SCIMM},
+ { insn_bbit0, 0, 0 },
+ { insn_bbit1, 0, 0 },
+ { insn_lwx, 0, 0 },
+ { insn_ldx, 0, 0 },
+ { insn_invalid, 0, 0 }
+};
+
+#undef M
+
+static inline __uasminit u32 build_bimm(s32 arg)
+{
+ if (arg > 0xffff || arg < -0x10000)
+ printk(KERN_WARNING "Micro-assembler field overflow\n");
+
+ if (arg & 0x3)
+ printk(KERN_WARNING "Invalid micro-assembler branch target\n");
+
+ return ((arg < 0) ? (1 << 15) : 0) | ((arg >> 1) & 0x7fff);
+}
+
+static inline __uasminit u32 build_jimm(u32 arg)
+{
+ if ((arg & ~(JIMM_MASK << 1)) - 1)
+ printk(KERN_WARNING "Micro-assembler field overflow\n");
+
+ return (arg >> 1) & JIMM_MASK;
+}
+
+/*
+ * The order of opcode arguments is implicitly left to right,
+ * starting with RS and ending with FUNC or IMM.
+ */
+static void __uasminit build_insn(u32 **buf, enum opcode opc, ...)
+{
+ struct insn *ip = NULL;
+ unsigned int i;
+ va_list ap;
+ u32 op;
+
+ for (i = 0; insn_table[i].opcode != insn_invalid; i++)
+ if (insn_table[i].opcode == opc) {
+ ip = &insn_table[i];
+ break;
+ }
+
+ if (!ip || (opc == insn_daddiu && r4k_daddiu_bug()))
+ panic("Unsupported Micro-assembler instruction %d", opc);
+
+ op = ip->match;
+ va_start(ap, opc);
+ if (ip->fields & RS) {
+ if (opc == insn_mfc0 || opc == insn_mtc0)
+ op |= build_rt(va_arg(ap, u32));
+ else
+ op |= build_rs(va_arg(ap, u32));
+ }
+ if (ip->fields & RT) {
+ if (opc == insn_mfc0 || opc == insn_mtc0)
+ op |= build_rs(va_arg(ap, u32));
+ else
+ op |= build_rt(va_arg(ap, u32));
+ }
+ if (ip->fields & RD)
+ op |= build_rd(va_arg(ap, u32));
+ if (ip->fields & RE)
+ op |= build_re(va_arg(ap, u32));
+ if (ip->fields & SIMM)
+ op |= build_simm(va_arg(ap, s32));
+ if (ip->fields & UIMM)
+ op |= build_uimm(va_arg(ap, u32));
+ if (ip->fields & BIMM)
+ op |= build_bimm(va_arg(ap, s32));
+ if (ip->fields & JIMM)
+ op |= build_jimm(va_arg(ap, u32));
+ if (ip->fields & FUNC)
+ op |= build_func(va_arg(ap, u32));
+ if (ip->fields & SET)
+ op |= build_set(va_arg(ap, u32));
+ if (ip->fields & SCIMM)
+ op |= build_scimm(va_arg(ap, u32));
+ va_end(ap);
+
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+ **buf = ((op & 0xffff) << 16) | (op >> 16);
+#else
+ **buf = op;
+#endif
+ (*buf)++;
+}
+
+static inline void __uasminit
+__resolve_relocs(struct uasm_reloc *rel, struct uasm_label *lab)
+{
+ long laddr = (long)lab->addr;
+ long raddr = (long)rel->addr;
+
+ switch (rel->type) {
+ case R_MIPS_PC16:
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+ *rel->addr |= (build_bimm(laddr - (raddr + 4)) << 16);
+#else
+ *rel->addr |= build_bimm(laddr - (raddr + 4));
+#endif
+ break;
+
+ default:
+ panic("Unsupported Micro-assembler relocation %d",
+      rel->type);
+ }
+}
diff --git a/arch/mips/mm/uasm-mips.c b/arch/mips/mm/uasm-mips.c
new file mode 100644
index 0000000..e86334b
--- /dev/null
+++ b/arch/mips/mm/uasm-mips.c
@@ -0,0 +1,178 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2012 MIPS Technologies, Inc.  All rights reserved.
+ */
+
+#define RS_MASK 0x1f
+#define RS_SH 21
+#define RT_MASK 0x1f
+#define RT_SH 16
+#define SCIMM_MASK 0xfffff
+#define SCIMM_SH 6
+
+/* This macro sets the non-variable bits of an instruction. */
+#define M(a, b, c, d, e, f) \
+ ((a) << OP_SH \
+ | (b) << RS_SH \
+ | (c) << RT_SH \
+ | (d) << RD_SH \
+ | (e) << RE_SH \
+ | (f) << FUNC_SH)
+
+static struct insn insn_table[] __uasminitdata = {
+ { insn_addu, M(spec_op, 0, 0, 0, 0, addu_op), RS | RT | RD },
+ { insn_addiu, M(addiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
+ { insn_and, M(spec_op, 0, 0, 0, 0, and_op), RS | RT | RD },
+ { insn_andi, M(andi_op, 0, 0, 0, 0, 0), RS | RT | UIMM },
+ { insn_beq, M(beq_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
+ { insn_beql, M(beql_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
+ { insn_bgez, M(bcond_op, 0, bgez_op, 0, 0, 0), RS | BIMM },
+ { insn_bgezl, M(bcond_op, 0, bgezl_op, 0, 0, 0), RS | BIMM },
+ { insn_bltz, M(bcond_op, 0, bltz_op, 0, 0, 0), RS | BIMM },
+ { insn_bltzl, M(bcond_op, 0, bltzl_op, 0, 0, 0), RS | BIMM },
+ { insn_bne, M(bne_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
+ { insn_cache,  M(cache_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_daddu, M(spec_op, 0, 0, 0, 0, daddu_op), RS | RT | RD },
+ { insn_daddiu, M(daddiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
+ { insn_dmfc0, M(cop0_op, dmfc_op, 0, 0, 0, 0), RT | RD | SET},
+ { insn_dmtc0, M(cop0_op, dmtc_op, 0, 0, 0, 0), RT | RD | SET},
+ { insn_dsll, M(spec_op, 0, 0, 0, 0, dsll_op), RT | RD | RE },
+ { insn_dsll32, M(spec_op, 0, 0, 0, 0, dsll32_op), RT | RD | RE },
+ { insn_dsra, M(spec_op, 0, 0, 0, 0, dsra_op), RT | RD | RE },
+ { insn_dsrl, M(spec_op, 0, 0, 0, 0, dsrl_op), RT | RD | RE },
+ { insn_dsrl32, M(spec_op, 0, 0, 0, 0, dsrl32_op), RT | RD | RE },
+ { insn_drotr, M(spec_op, 1, 0, 0, 0, dsrl_op), RT | RD | RE },
+ { insn_drotr32, M(spec_op, 1, 0, 0, 0, dsrl32_op), RT | RD | RE },
+ { insn_dsubu, M(spec_op, 0, 0, 0, 0, dsubu_op), RS | RT | RD },
+ { insn_eret,  M(cop0_op, cop_op, 0, 0, 0, eret_op),  0 },
+ { insn_ext, M(spec3_op, 0, 0, 0, 0, ext_op), RS | RT | RD | RE },
+ { insn_ins, M(spec3_op, 0, 0, 0, 0, ins_op), RS | RT | RD | RE },
+ { insn_j,  M(j_op, 0, 0, 0, 0, 0),  JIMM },
+ { insn_jal,  M(jal_op, 0, 0, 0, 0, 0),  JIMM },
+ { insn_jr,  M(spec_op, 0, 0, 0, 0, jr_op),  RS },
+ { insn_ld,  M(ld_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_ll,  M(ll_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_lld,  M(lld_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_lui,  M(lui_op, 0, 0, 0, 0, 0),  RT | SIMM },
+ { insn_lw,  M(lw_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_mfc0,  M(cop0_op, mfc_op, 0, 0, 0, 0),  RT | RD | SET},
+ { insn_mtc0,  M(cop0_op, mtc_op, 0, 0, 0, 0),  RT | RD | SET},
+ { insn_or,  M(spec_op, 0, 0, 0, 0, or_op),  RS | RT | RD },
+ { insn_ori,  M(ori_op, 0, 0, 0, 0, 0),  RS | RT | UIMM },
+ { insn_pref,  M(pref_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_rfe,  M(cop0_op, cop_op, 0, 0, 0, rfe_op),  0 },
+ { insn_sc,  M(sc_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_scd,  M(scd_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_sd,  M(sd_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_sll,  M(spec_op, 0, 0, 0, 0, sll_op),  RT | RD | RE },
+ { insn_sra,  M(spec_op, 0, 0, 0, 0, sra_op),  RT | RD | RE },
+ { insn_srl,  M(spec_op, 0, 0, 0, 0, srl_op),  RT | RD | RE },
+ { insn_rotr,  M(spec_op, 1, 0, 0, 0, srl_op),  RT | RD | RE },
+ { insn_subu,  M(spec_op, 0, 0, 0, 0, subu_op),  RS | RT | RD },
+ { insn_sw,  M(sw_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
+ { insn_tlbp,  M(cop0_op, cop_op, 0, 0, 0, tlbp_op),  0 },
+ { insn_tlbr,  M(cop0_op, cop_op, 0, 0, 0, tlbr_op),  0 },
+ { insn_tlbwi,  M(cop0_op, cop_op, 0, 0, 0, tlbwi_op),  0 },
+ { insn_tlbwr,  M(cop0_op, cop_op, 0, 0, 0, tlbwr_op),  0 },
+ { insn_xor,  M(spec_op, 0, 0, 0, 0, xor_op),  RS | RT | RD },
+ { insn_xori,  M(xori_op, 0, 0, 0, 0, 0),  RS | RT | UIMM },
+ { insn_dins, M(spec3_op, 0, 0, 0, 0, dins_op), RS | RT | RD | RE },
+ { insn_dinsm, M(spec3_op, 0, 0, 0, 0, dinsm_op), RS | RT | RD | RE },
+ { insn_syscall, M(spec_op, 0, 0, 0, 0, syscall_op), SCIMM},
+ { insn_bbit0, M(lwc2_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
+ { insn_bbit1, M(swc2_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
+ { insn_lwx, M(spec3_op, 0, 0, 0, lwx_op, lx_op), RS | RT | RD },
+ { insn_ldx, M(spec3_op, 0, 0, 0, ldx_op, lx_op), RS | RT | RD },
+ { insn_invalid, 0, 0 }
+};
+
+#undef M
+
+static inline __uasminit u32 build_bimm(s32 arg)
+{
+ if (arg > 0x1ffff || arg < -0x20000)
+ printk(KERN_WARNING "Micro-assembler field overflow\n");
+
+ if (arg & 0x3)
+ printk(KERN_WARNING "Invalid micro-assembler branch target\n");
+
+ return ((arg < 0) ? (1 << 15) : 0) | ((arg >> 2) & 0x7fff);
+}
+
+static inline __uasminit u32 build_jimm(u32 arg)
+{
+ if (arg & ~(JIMM_MASK << 2))
+ printk(KERN_WARNING "Micro-assembler field overflow\n");
+
+ return (arg >> 2) & JIMM_MASK;
+}
+
+/*
+ * The order of opcode arguments is implicitly left to right,
+ * starting with RS and ending with FUNC or IMM.
+ */
+static void __uasminit build_insn(u32 **buf, enum opcode opc, ...)
+{
+ struct insn *ip = NULL;
+ unsigned int i;
+ va_list ap;
+ u32 op;
+
+ for (i = 0; insn_table[i].opcode != insn_invalid; i++)
+ if (insn_table[i].opcode == opc) {
+ ip = &insn_table[i];
+ break;
+ }
+
+ if (!ip || (opc == insn_daddiu && r4k_daddiu_bug()))
+ panic("Unsupported Micro-assembler instruction %d", opc);
+
+ op = ip->match;
+ va_start(ap, opc);
+ if (ip->fields & RS)
+ op |= build_rs(va_arg(ap, u32));
+ if (ip->fields & RT)
+ op |= build_rt(va_arg(ap, u32));
+ if (ip->fields & RD)
+ op |= build_rd(va_arg(ap, u32));
+ if (ip->fields & RE)
+ op |= build_re(va_arg(ap, u32));
+ if (ip->fields & SIMM)
+ op |= build_simm(va_arg(ap, s32));
+ if (ip->fields & UIMM)
+ op |= build_uimm(va_arg(ap, u32));
+ if (ip->fields & BIMM)
+ op |= build_bimm(va_arg(ap, s32));
+ if (ip->fields & JIMM)
+ op |= build_jimm(va_arg(ap, u32));
+ if (ip->fields & FUNC)
+ op |= build_func(va_arg(ap, u32));
+ if (ip->fields & SET)
+ op |= build_set(va_arg(ap, u32));
+ if (ip->fields & SCIMM)
+ op |= build_scimm(va_arg(ap, u32));
+ va_end(ap);
+
+ **buf = op;
+ (*buf)++;
+}
+
+static inline void __uasminit
+__resolve_relocs(struct uasm_reloc *rel, struct uasm_label *lab)
+{
+ long laddr = (long)lab->addr;
+ long raddr = (long)rel->addr;
+
+ switch (rel->type) {
+ case R_MIPS_PC16:
+ *rel->addr |= build_bimm(laddr - (raddr + 4));
+ break;
+
+ default:
+ panic("Unsupported Micro-assembler relocation %d",
+      rel->type);
+ }
+}
diff --git a/arch/mips/mm/uasm.c b/arch/mips/mm/uasm.c
index 39b8910..d3b01b90 100644
--- a/arch/mips/mm/uasm.c
+++ b/arch/mips/mm/uasm.c
@@ -10,6 +10,7 @@
  * Copyright (C) 2004, 2005, 2006, 2008  Thiemo Seufer
  * Copyright (C) 2005, 2007  Maciej W. Rozycki
  * Copyright (C) 2006  Ralf Baechle ([hidden email])
+ * Copyright (C) 2012 MIPS Technologies, Inc.  All rights reserved.
  */
 
 #include <linux/kernel.h>
@@ -38,9 +39,18 @@ enum fields {
 #define OP_MASK 0x3f
 #define OP_SH 26
 #define RS_MASK 0x1f
-#define RS_SH 21
 #define RT_MASK 0x1f
+#ifdef CONFIG_CPU_MICROMIPS
+#define RS_SH 16
+#define RT_SH 21
+#define SCIMM_MASK 0x3ff
+#define SCIMM_SH 16
+#else
+#define RS_SH 21
 #define RT_SH 16
+#define SCIMM_MASK 0xfffff
+#define SCIMM_SH 6
+#endif
 #define RD_MASK 0x1f
 #define RD_SH 11
 #define RE_MASK 0x1f
@@ -53,8 +63,6 @@ enum fields {
 #define FUNC_SH 0
 #define SET_MASK 0x7
 #define SET_SH 0
-#define SCIMM_MASK 0xfffff
-#define SCIMM_SH 6
 
 enum opcode {
  insn_invalid,
@@ -77,217 +85,83 @@ struct insn {
  enum fields fields;
 };
 
-/* This macro sets the non-variable bits of an instruction. */
-#define M(a, b, c, d, e, f) \
- ((a) << OP_SH \
- | (b) << RS_SH \
- | (c) << RT_SH \
- | (d) << RD_SH \
- | (e) << RE_SH \
- | (f) << FUNC_SH)
-
-static struct insn insn_table[] __uasminitdata = {
- { insn_addiu, M(addiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
- { insn_addu, M(spec_op, 0, 0, 0, 0, addu_op), RS | RT | RD },
- { insn_andi, M(andi_op, 0, 0, 0, 0, 0), RS | RT | UIMM },
- { insn_and, M(spec_op, 0, 0, 0, 0, and_op), RS | RT | RD },
- { insn_bbit0, M(lwc2_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
- { insn_bbit1, M(swc2_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
- { insn_beql, M(beql_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
- { insn_beq, M(beq_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
- { insn_bgezl, M(bcond_op, 0, bgezl_op, 0, 0, 0), RS | BIMM },
- { insn_bgez, M(bcond_op, 0, bgez_op, 0, 0, 0), RS | BIMM },
- { insn_bltzl, M(bcond_op, 0, bltzl_op, 0, 0, 0), RS | BIMM },
- { insn_bltz, M(bcond_op, 0, bltz_op, 0, 0, 0), RS | BIMM },
- { insn_bne, M(bne_op, 0, 0, 0, 0, 0), RS | RT | BIMM },
- { insn_cache,  M(cache_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_daddiu, M(daddiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
- { insn_daddu, M(spec_op, 0, 0, 0, 0, daddu_op), RS | RT | RD },
- { insn_dinsm, M(spec3_op, 0, 0, 0, 0, dinsm_op), RS | RT | RD | RE },
- { insn_dins, M(spec3_op, 0, 0, 0, 0, dins_op), RS | RT | RD | RE },
- { insn_dmfc0, M(cop0_op, dmfc_op, 0, 0, 0, 0), RT | RD | SET},
- { insn_dmtc0, M(cop0_op, dmtc_op, 0, 0, 0, 0), RT | RD | SET},
- { insn_drotr32, M(spec_op, 1, 0, 0, 0, dsrl32_op), RT | RD | RE },
- { insn_drotr, M(spec_op, 1, 0, 0, 0, dsrl_op), RT | RD | RE },
- { insn_dsll32, M(spec_op, 0, 0, 0, 0, dsll32_op), RT | RD | RE },
- { insn_dsll, M(spec_op, 0, 0, 0, 0, dsll_op), RT | RD | RE },
- { insn_dsra, M(spec_op, 0, 0, 0, 0, dsra_op), RT | RD | RE },
- { insn_dsrl32, M(spec_op, 0, 0, 0, 0, dsrl32_op), RT | RD | RE },
- { insn_dsrl, M(spec_op, 0, 0, 0, 0, dsrl_op), RT | RD | RE },
- { insn_dsubu, M(spec_op, 0, 0, 0, 0, dsubu_op), RS | RT | RD },
- { insn_eret,  M(cop0_op, cop_op, 0, 0, 0, eret_op),  0 },
- { insn_ext, M(spec3_op, 0, 0, 0, 0, ext_op), RS | RT | RD | RE },
- { insn_ins, M(spec3_op, 0, 0, 0, 0, ins_op), RS | RT | RD | RE },
- { insn_j,  M(j_op, 0, 0, 0, 0, 0),  JIMM },
- { insn_jal,  M(jal_op, 0, 0, 0, 0, 0),  JIMM },
- { insn_j,  M(j_op, 0, 0, 0, 0, 0),  JIMM },
- { insn_jr,  M(spec_op, 0, 0, 0, 0, jr_op),  RS },
- { insn_ld,  M(ld_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_ldx, M(spec3_op, 0, 0, 0, ldx_op, lx_op), RS | RT | RD },
- { insn_lld,  M(lld_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_ll,  M(ll_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_lui,  M(lui_op, 0, 0, 0, 0, 0),  RT | SIMM },
- { insn_lw,  M(lw_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_lwx, M(spec3_op, 0, 0, 0, lwx_op, lx_op), RS | RT | RD },
- { insn_mfc0,  M(cop0_op, mfc_op, 0, 0, 0, 0),  RT | RD | SET},
- { insn_mtc0,  M(cop0_op, mtc_op, 0, 0, 0, 0),  RT | RD | SET},
- { insn_ori,  M(ori_op, 0, 0, 0, 0, 0),  RS | RT | UIMM },
- { insn_or,  M(spec_op, 0, 0, 0, 0, or_op),  RS | RT | RD },
- { insn_pref,  M(pref_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_rfe,  M(cop0_op, cop_op, 0, 0, 0, rfe_op),  0 },
- { insn_rotr,  M(spec_op, 1, 0, 0, 0, srl_op),  RT | RD | RE },
- { insn_scd,  M(scd_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_sc,  M(sc_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_sd,  M(sd_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_sll,  M(spec_op, 0, 0, 0, 0, sll_op),  RT | RD | RE },
- { insn_sra,  M(spec_op, 0, 0, 0, 0, sra_op),  RT | RD | RE },
- { insn_srl,  M(spec_op, 0, 0, 0, 0, srl_op),  RT | RD | RE },
- { insn_subu,  M(spec_op, 0, 0, 0, 0, subu_op),  RS | RT | RD },
- { insn_sw,  M(sw_op, 0, 0, 0, 0, 0),  RS | RT | SIMM },
- { insn_syscall, M(spec_op, 0, 0, 0, 0, syscall_op), SCIMM},
- { insn_tlbp,  M(cop0_op, cop_op, 0, 0, 0, tlbp_op),  0 },
- { insn_tlbr,  M(cop0_op, cop_op, 0, 0, 0, tlbr_op),  0 },
- { insn_tlbwi,  M(cop0_op, cop_op, 0, 0, 0, tlbwi_op),  0 },
- { insn_tlbwr,  M(cop0_op, cop_op, 0, 0, 0, tlbwr_op),  0 },
- { insn_xori,  M(xori_op, 0, 0, 0, 0, 0),  RS | RT | UIMM },
- { insn_xor,  M(spec_op, 0, 0, 0, 0, xor_op),  RS | RT | RD },
- { insn_invalid, 0, 0 }
-};
-
-#undef M
-
 static inline __uasminit u32 build_rs(u32 arg)
 {
- WARN(arg & ~RS_MASK, KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~RS_MASK)
+ printk(KERN_WARNING "Micro-assembler RS field overflow\n");
 
  return (arg & RS_MASK) << RS_SH;
 }
 
 static inline __uasminit u32 build_rt(u32 arg)
 {
- WARN(arg & ~RT_MASK, KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~RT_MASK)
+ printk(KERN_WARNING "Micro-assembler RT field overflow\n");
 
  return (arg & RT_MASK) << RT_SH;
 }
 
 static inline __uasminit u32 build_rd(u32 arg)
 {
- WARN(arg & ~RD_MASK, KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~RD_MASK)
+ printk(KERN_WARNING "Micro-assembler RD field overflow\n");
 
  return (arg & RD_MASK) << RD_SH;
 }
 
 static inline __uasminit u32 build_re(u32 arg)
 {
- WARN(arg & ~RE_MASK, KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~RE_MASK)
+ printk(KERN_WARNING "Micro-assembler RE field overflow\n");
 
  return (arg & RE_MASK) << RE_SH;
 }
 
 static inline __uasminit u32 build_simm(s32 arg)
 {
- WARN(arg > 0x7fff || arg < -0x8000,
-     KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg > 0x7fff || arg < -0x8000)
+ printk(KERN_WARNING "Micro-assembler SIMM field overflow\n");
 
  return arg & 0xffff;
 }
 
 static inline __uasminit u32 build_uimm(u32 arg)
 {
- WARN(arg & ~IMM_MASK, KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~IMM_MASK)
+ printk(KERN_WARNING "Micro-assembler UIMM field overflow\n");
 
  return arg & IMM_MASK;
 }
 
-static inline __uasminit u32 build_bimm(s32 arg)
-{
- WARN(arg > 0x1ffff || arg < -0x20000,
-     KERN_WARNING "Micro-assembler field overflow\n");
-
- WARN(arg & 0x3, KERN_WARNING "Invalid micro-assembler branch target\n");
-
- return ((arg < 0) ? (1 << 15) : 0) | ((arg >> 2) & 0x7fff);
-}
-
-static inline __uasminit u32 build_jimm(u32 arg)
-{
- WARN(arg & ~(JIMM_MASK << 2),
-     KERN_WARNING "Micro-assembler field overflow\n");
-
- return (arg >> 2) & JIMM_MASK;
-}
-
 static inline __uasminit u32 build_scimm(u32 arg)
 {
- WARN(arg & ~SCIMM_MASK,
-     KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~SCIMM_MASK)
+ printk(KERN_WARNING "Micro-assembler SCIMM field overflow\n");
 
  return (arg & SCIMM_MASK) << SCIMM_SH;
 }
 
 static inline __uasminit u32 build_func(u32 arg)
 {
- WARN(arg & ~FUNC_MASK, KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~FUNC_MASK)
+ printk(KERN_WARNING "Micro-assembler FUNC field overflow\n");
 
  return arg & FUNC_MASK;
 }
 
 static inline __uasminit u32 build_set(u32 arg)
 {
- WARN(arg & ~SET_MASK, KERN_WARNING "Micro-assembler field overflow\n");
+ if (arg & ~SET_MASK)
+ printk(KERN_WARNING "Micro-assembler SET field overflow\n");
 
  return arg & SET_MASK;
 }
 
-/*
- * The order of opcode arguments is implicitly left to right,
- * starting with RS and ending with FUNC or IMM.
- */
-static void __uasminit build_insn(u32 **buf, enum opcode opc, ...)
-{
- struct insn *ip = NULL;
- unsigned int i;
- va_list ap;
- u32 op;
-
- for (i = 0; insn_table[i].opcode != insn_invalid; i++)
- if (insn_table[i].opcode == opc) {
- ip = &insn_table[i];
- break;
- }
-
- if (!ip || (opc == insn_daddiu && r4k_daddiu_bug()))
- panic("Unsupported Micro-assembler instruction %d", opc);
-
- op = ip->match;
- va_start(ap, opc);
- if (ip->fields & RS)
- op |= build_rs(va_arg(ap, u32));
- if (ip->fields & RT)
- op |= build_rt(va_arg(ap, u32));
- if (ip->fields & RD)
- op |= build_rd(va_arg(ap, u32));
- if (ip->fields & RE)
- op |= build_re(va_arg(ap, u32));
- if (ip->fields & SIMM)
- op |= build_simm(va_arg(ap, s32));
- if (ip->fields & UIMM)
- op |= build_uimm(va_arg(ap, u32));
- if (ip->fields & BIMM)
- op |= build_bimm(va_arg(ap, s32));
- if (ip->fields & JIMM)
- op |= build_jimm(va_arg(ap, u32));
- if (ip->fields & FUNC)
- op |= build_func(va_arg(ap, u32));
- if (ip->fields & SET)
- op |= build_set(va_arg(ap, u32));
- if (ip->fields & SCIMM)
- op |= build_scimm(va_arg(ap, u32));
- va_end(ap);
-
- **buf = op;
- (*buf)++;
-}
+#ifdef CONFIG_CPU_MICROMIPS
+#include "uasm-micromips.c"
+#else
+#include "uasm-mips.c"
+#endif
 
 #define I_u1u2u3(op) \
 Ip_u1u2u3(op) \
@@ -552,23 +426,6 @@ uasm_r_mips_pc16(struct uasm_reloc **rel, u32 *addr, int lid)
 }
 UASM_EXPORT_SYMBOL(uasm_r_mips_pc16);
 
-static inline void __uasminit
-__resolve_relocs(struct uasm_reloc *rel, struct uasm_label *lab)
-{
- long laddr = (long)lab->addr;
- long raddr = (long)rel->addr;
-
- switch (rel->type) {
- case R_MIPS_PC16:
- *rel->addr |= build_bimm(laddr - (raddr + 4));
- break;
-
- default:
- panic("Unsupported Micro-assembler relocation %d",
-      rel->type);
- }
-}
-
 void __uasminit
 uasm_resolve_relocs(struct uasm_reloc *rel, struct uasm_label *lab)
 {
--
1.7.9.5



[PATCH v99,02/13] MIPS: Whitespace clean-ups after microMIPS additions.

Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Clean up tabs, spaces, macros, etc. after adding the microMIPS
instructions for the micro-assembler.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/include/asm/inst.h     |  134 +++++++++++++++++++-------------------
 arch/mips/include/asm/mipsregs.h |   40 +++++++-----
 arch/mips/kernel/proc.c          |    1 +
 arch/mips/kernel/traps.c         |    4 +-
 4 files changed, 92 insertions(+), 87 deletions(-)

diff --git a/arch/mips/include/asm/inst.h b/arch/mips/include/asm/inst.h
index c76899f..2b2e0e3 100644
--- a/arch/mips/include/asm/inst.h
+++ b/arch/mips/include/asm/inst.h
@@ -262,7 +262,7 @@ struct ma_format { /* FPU multiply and add format (MIPS IV) */
  unsigned int fmt : 2;
 };
 
-struct b_format { /* BREAK and SYSCALL */
+struct b_format { /* BREAK and SYSCALL */
  unsigned int opcode:6;
  unsigned int code:20;
  unsigned int func:6;
@@ -276,7 +276,7 @@ struct fb_format { /* FPU branch format */
  unsigned int simmediate:16;
 };
 
-struct fp0_format {      /* FPU multipy and add format (MIPS32) */
+struct fp0_format { /* FPU multipy and add format (MIPS32) */
  unsigned int opcode:6;
  unsigned int fmt:5;
  unsigned int ft:5;
@@ -285,7 +285,7 @@ struct fp0_format {      /* FPU multipy and add format (MIPS32) */
  unsigned int func:6;
 };
 
-struct mm_fp0_format {      /* FPU multipy and add format (microMIPS) */
+struct mm_fp0_format { /* FPU multipy and add format (microMIPS) */
  unsigned int opcode:6;
  unsigned int ft:5;
  unsigned int fs:5;
@@ -295,7 +295,7 @@ struct mm_fp0_format {      /* FPU multipy and add format (microMIPS) */
  unsigned int func:6;
 };
 
-struct fp1_format {      /* FPU mfc1 and cfc1 format (MIPS32) */
+struct fp1_format { /* FPU mfc1 and cfc1 format (MIPS32) */
  unsigned int opcode:6;
  unsigned int op:5;
  unsigned int rt:5;
@@ -304,7 +304,7 @@ struct fp1_format {      /* FPU mfc1 and cfc1 format (MIPS32) */
  unsigned int func:6;
 };
 
-struct mm_fp1_format {      /* FPU mfc1 and cfc1 format (microMIPS) */
+struct mm_fp1_format { /* FPU mfc1 and cfc1 format (microMIPS) */
  unsigned int opcode:6;
  unsigned int rt:5;
  unsigned int fs:5;
@@ -313,7 +313,7 @@ struct mm_fp1_format {      /* FPU mfc1 and cfc1 format (microMIPS) */
  unsigned int func:6;
 };
 
-struct mm_fp2_format {      /* FPU movt and movf format (microMIPS) */
+struct mm_fp2_format { /* FPU movt and movf format (microMIPS) */
  unsigned int opcode:6;
  unsigned int fd:5;
  unsigned int fs:5;
@@ -324,7 +324,7 @@ struct mm_fp2_format {      /* FPU movt and movf format (microMIPS) */
  unsigned int func:6;
 };
 
-struct mm_fp3_format {      /* FPU abs and neg format (microMIPS) */
+struct mm_fp3_format { /* FPU abs and neg format (microMIPS) */
  unsigned int opcode:6;
  unsigned int rt:5;
  unsigned int fs:5;
@@ -333,7 +333,7 @@ struct mm_fp3_format {      /* FPU abs and neg format (microMIPS) */
  unsigned int func:6;
 };
 
-struct mm_fp4_format {      /* FPU c.cond format (microMIPS) */
+struct mm_fp4_format { /* FPU c.cond format (microMIPS) */
  unsigned int opcode:6;
  unsigned int rt:5;
  unsigned int fs:5;
@@ -343,7 +343,7 @@ struct mm_fp4_format {      /* FPU c.cond format (microMIPS) */
  unsigned int func:6;
 };
 
-struct mm_fp5_format {      /* FPU lwxc1 and swxc1 format (microMIPS) */
+struct mm_fp5_format { /* FPU lwxc1 and swxc1 format (microMIPS) */
  unsigned int opcode:6;
  unsigned int index:5;
  unsigned int base:5;
@@ -370,20 +370,20 @@ struct mm_fp6_format { /* FPU madd and msub format (microMIPS) */
  unsigned int func:6;
 };
 
-struct mm16b1_format { /* microMIPS 16-bit branch format */
+struct mm16b1_format { /* microMIPS 16-bit branch format */
  unsigned int opcode:6;
  unsigned int rs:3;
  signed int simmediate:7;
  unsigned int duplicate:16; /* a copy of the instr */
 };
 
-struct mm16b0_format { /* microMIPS 16-bit branch format */
+struct mm16b0_format { /* microMIPS 16-bit branch format */
  unsigned int opcode:6;
  signed int simmediate:10;
  unsigned int duplicate:16; /* a copy of the instr */
 };
 
-struct mm_i_format { /* Immediate format (addi, lw, ...) */
+struct mm_i_format { /* Immediate format (addi, lw, ...) */
  unsigned int opcode:6;
  unsigned int rt:5;
  unsigned int rs:5;
@@ -495,72 +495,72 @@ struct j_format { /* Jump format */
 };
 
 struct i_format { /* Immediate format */
- signed int simmediate : 16;
- unsigned int rt : 5;
- unsigned int rs : 5;
- unsigned int opcode : 6;
+ signed int simmediate:16;
+ unsigned int rt:5;
+ unsigned int rs:5;
+ unsigned int opcode:6;
 };
 
 struct u_format { /* Unsigned immediate format */
- unsigned int uimmediate : 16;
- unsigned int rt : 5;
- unsigned int rs : 5;
- unsigned int opcode : 6;
+ unsigned int uimmediate:16;
+ unsigned int rt:5;
+ unsigned int rs:5;
+ unsigned int opcode:6;
 };
 
 struct c_format { /* Cache (>= R6000) format */
- unsigned int simmediate : 16;
- unsigned int cache : 2;
- unsigned int c_op : 3;
- unsigned int rs : 5;
- unsigned int opcode : 6;
+ unsigned int simmediate:16;
+ unsigned int cache:2;
+ unsigned int c_op:3;
+ unsigned int rs:5;
+ unsigned int opcode:6;
 };
 
 struct r_format { /* Register format */
- unsigned int func : 6;
- unsigned int re : 5;
- unsigned int rd : 5;
- unsigned int rt : 5;
- unsigned int rs : 5;
- unsigned int opcode : 6;
+ unsigned int func:6;
+ unsigned int re:5;
+ unsigned int rd:5;
+ unsigned int rt:5;
+ unsigned int rs:5;
+ unsigned int opcode:6;
 };
 
 struct p_format { /* Performance counter format (R10000) */
- unsigned int func : 6;
- unsigned int re : 5;
- unsigned int rd : 5;
- unsigned int rt : 5;
- unsigned int rs : 5;
- unsigned int opcode : 6;
+ unsigned int func:6;
+ unsigned int re:5;
+ unsigned int rd:5;
+ unsigned int rt:5;
+ unsigned int rs:5;
+ unsigned int opcode:6;
 };
 
 struct f_format { /* FPU register format */
- unsigned int func : 6;
- unsigned int re : 5;
- unsigned int rd : 5;
- unsigned int rt : 5;
- unsigned int fmt : 4;
- unsigned int : 1;
- unsigned int opcode : 6;
+ unsigned int func:6;
+ unsigned int re:5;
+ unsigned int rd:5;
+ unsigned int rt:5;
+ unsigned int fmt:4;
+ unsigned int:1;
+ unsigned int opcode:6;
 };
 
-struct ma_format { /* FPU multiply and add format (MIPS IV) */
- unsigned int fmt : 2;
- unsigned int func : 4;
- unsigned int fd : 5;
- unsigned int fs : 5;
- unsigned int ft : 5;
- unsigned int fr : 5;
- unsigned int opcode : 6;
+struct ma_format { /* FPU multipy and add format (MIPS IV) */
+ unsigned int fmt:2;
+ unsigned int func:4;
+ unsigned int fd:5;
+ unsigned int fs:5;
+ unsigned int ft:5;
+ unsigned int fr:5;
+ unsigned int opcode:6;
 };
 
-struct b_format { /* BREAK and SYSCALL */
+struct b_format { /* BREAK and SYSCALL */
  unsigned int func:6;
  unsigned int code:20;
  unsigned int opcode:6;
 };
 
-struct fb_format { /* FPU branch format */
+struct fb_format { /* FPU branch format */
  unsigned int simmediate:16;
  unsigned int flag:2;
  unsigned int cc:3;
@@ -568,7 +568,7 @@ struct fb_format { /* FPU branch format */
  unsigned int opcode:6;
 };
 
-struct fp0_format { /* FPU multipy and add format (MIPS32) */
+struct fp0_format { /* FPU multipy and add format (MIPS32) */
  unsigned int func:6;
  unsigned int fd:5;
  unsigned int fs:5;
@@ -577,7 +577,7 @@ struct fp0_format { /* FPU multipy and add format (MIPS32) */
  unsigned int opcode:6;
 };
 
-struct mm_fp0_format { /* FPU multipy and add format (microMIPS) */
+struct mm_fp0_format { /* FPU multipy and add format (microMIPS) */
  unsigned int func:6;
  unsigned int op:2;
  unsigned int fmt:3;
@@ -587,7 +587,7 @@ struct mm_fp0_format { /* FPU multipy and add format (microMIPS) */
  unsigned int opcode:6;
 };
 
-struct fp1_format { /* FPU mfc1 and cfc1 format (MIPS32) */
+struct fp1_format { /* FPU mfc1 and cfc1 format (MIPS32) */
  unsigned int func:6;
  unsigned int fd:5;
  unsigned int fs:5;
@@ -596,7 +596,7 @@ struct fp1_format { /* FPU mfc1 and cfc1 format (MIPS32) */
  unsigned int opcode:6;
 };
 
-struct mm_fp1_format { /* FPU mfc1 and cfc1 format (microMIPS) */
+struct mm_fp1_format { /* FPU mfc1 and cfc1 format (microMIPS) */
  unsigned int func:6;
  unsigned int op:8;
  unsigned int fmt:2;
@@ -605,7 +605,7 @@ struct mm_fp1_format { /* FPU mfc1 and cfc1 format (microMIPS) */
  unsigned int opcode:6;
 };
 
-struct mm_fp2_format { /* FPU movt and movf format (microMIPS) */
+struct mm_fp2_format { /* FPU movt and movf format (microMIPS) */
  unsigned int func:6;
  unsigned int op:3;
  unsigned int fmt:2;
@@ -616,7 +616,7 @@ struct mm_fp2_format { /* FPU movt and movf format (microMIPS) */
  unsigned int opcode:6;
 };
 
-struct mm_fp3_format { /* FPU abs and neg format (microMIPS) */
+struct mm_fp3_format { /* FPU abs and neg format (microMIPS) */
  unsigned int func:6;
  unsigned int op:7;
  unsigned int fmt:3;
@@ -625,7 +625,7 @@ struct mm_fp3_format { /* FPU abs and neg format (microMIPS) */
  unsigned int opcode:6;
 };
 
-struct mm_fp4_format { /* FPU c.cond format (microMIPS) */
+struct mm_fp4_format { /* FPU c.cond format (microMIPS) */
  unsigned int func:6;
  unsigned int cond:4;
  unsigned int fmt:3;
@@ -635,7 +635,7 @@ struct mm_fp4_format { /* FPU c.cond format (microMIPS) */
  unsigned int opcode:6;
 };
 
-struct mm_fp5_format { /* FPU lwxc1 and swxc1 format (microMIPS) */
+struct mm_fp5_format { /* FPU lwxc1 and swxc1 format (microMIPS) */
  unsigned int func:6;
  unsigned int op:5;
  unsigned int fd:5;
@@ -644,7 +644,7 @@ struct mm_fp5_format { /* FPU lwxc1 and swxc1 format (microMIPS) */
  unsigned int opcode:6;
 };
 
-struct fp6_format { /* FPU madd and msub format (MIPS IV) */
+struct fp6_format { /* FPU madd and msub format (MIPS IV) */
  unsigned int func:6;
  unsigned int fd:5;
  unsigned int fs:5;
@@ -653,7 +653,7 @@ struct fp6_format { /* FPU madd and msub format (MIPS IV) */
  unsigned int opcode:6;
 };
 
-struct mm_fp6_format { /* FPU madd and msub format (microMIPS) */
+struct mm_fp6_format { /* FPU madd and msub format (microMIPS) */
  unsigned int func:6;
  unsigned int fr:5;
  unsigned int fd:5;
@@ -662,20 +662,20 @@ struct mm_fp6_format { /* FPU madd and msub format (microMIPS) */
  unsigned int opcode:6;
 };
 
-struct mm16b1_format { /* microMIPS 16-bit branch format */
+struct mm16b1_format { /* microMIPS 16-bit branch format */
  unsigned int duplicate:16; /* a copy of the instr */
  signed int simmediate:7;
  unsigned int rs:3;
  unsigned int opcode:6;
 };
 
-struct mm16b0_format { /* microMIPS 16-bit branch format */
+struct mm16b0_format { /* microMIPS 16-bit branch format */
  unsigned int duplicate:16; /* a copy of the instr */
  signed int simmediate:10;
  unsigned int opcode:6;
 };
 
-struct mm_i_format { /* Immediate format */
+struct mm_i_format { /* Immediate format */
  signed int simmediate:16;
  unsigned int rs:5;
  unsigned int rt:5;
diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 0c0e4a6..4b55a5a 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -1151,17 +1151,21 @@ do { \
 /*
  * Macros to access the floating point coprocessor control registers
  */
-#define read_32bit_cp1_register(source)                         \
-({ int __res;                                                   \
- __asm__ __volatile__(                                   \
- ".set\tpush\n\t" \
- ".set\treorder\n\t" \
- /* gas fails to assemble cfc1 for some archs (octeon).*/ \
- ".set\tmips1\n\t" \
-        "cfc1\t%0,"STR(source)"\n\t"                            \
- ".set\tpop" \
-        : "=r" (__res));                                        \
-        __res;})
+#define read_32bit_cp1_register(source) \
+({ \
+ int __res; \
+ \
+ __asm__ __volatile__( \
+ " .set push \n" \
+ " .set reorder \n" \
+ " # gas fails to assemble cfc1 for some archs, \n" \
+ " # like Octeon. \n" \
+ " .set mips1 \n" \
+ " cfc1 %0,"STR(source)" \n" \
+ " .set pop \n" \
+ : "=r" (__res)); \
+ __res; \
+})
 
 #ifdef HAVE_AS_DSP
 #define rddsp(mask) \
@@ -1298,12 +1302,12 @@ do { \
  unsigned int __res; \
  \
  __asm__ __volatile__( \
- " .set push \n" \
- " .set noat \n" \
- " # rddsp $1, %x1 \n" \
- " .word 0x7c000cb8 | (%x1 << 16) \n" \
- " move %0, $1 \n" \
- " .set pop \n" \
+ " .set push \n" \
+ " .set noat \n" \
+ " # rddsp $1, %x1 \n" \
+ " .word 0x7c000cb8 | (%x1 << 16) \n" \
+ " move %0, $1 \n" \
+ " .set pop \n" \
  : "=r" (__res) \
  : "i" (mask)); \
  __res; \
@@ -1318,7 +1322,7 @@ do { \
  " # wrdsp $1, %x1 \n" \
  " .word 0x7c2004f8 | (%x1 << 11) \n" \
  " .set pop \n" \
-        : \
+ : \
  : "r" (val), "i" (mask)); \
 } while (0)
 
diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
index 07dff54..239ae03 100644
--- a/arch/mips/kernel/proc.c
+++ b/arch/mips/kernel/proc.c
@@ -73,6 +73,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
  if (cpu_has_dsp) seq_printf(m, "%s", " dsp");
  if (cpu_has_dsp2) seq_printf(m, "%s", " dsp2");
  if (cpu_has_mipsmt) seq_printf(m, "%s", " mt");
+ if (cpu_has_mmips) seq_printf(m, "%s", " micromips");
  seq_printf(m, "\n");
 
  seq_printf(m, "shadow register sets\t: %d\n",
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index 9260986..cc7f4cc 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -514,7 +514,7 @@ static inline int simulate_ll(struct pt_regs *regs, unsigned int opcode)
  offset >>= 16;
 
  vaddr = (unsigned long __user *)
-        ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset);
+ ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset);
 
  if ((unsigned long)vaddr & 3)
  return SIGBUS;
@@ -554,7 +554,7 @@ static inline int simulate_sc(struct pt_regs *regs, unsigned int opcode)
  offset >>= 16;
 
  vaddr = (unsigned long __user *)
-        ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset);
+ ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset);
  reg = (opcode & RT) >> 16;
 
  if ((unsigned long)vaddr & 3)
--
1.7.9.5



[PATCH v99,03/13] MIPS: microMIPS: Floating point support for 16-bit instructions.

Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Add the logic needed to perform floating-point emulation when running
in microMIPS or MIPS16e mode.

Signed-off-by: Leonid Yegoshin <[hidden email]>
Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/include/asm/fpu_emulator.h |    7 +
 arch/mips/kernel/traps.c             |    2 +-
 arch/mips/math-emu/cp1emu.c          |  766 ++++++++++++++++++++++++++++++----
 arch/mips/math-emu/dsemul.c          |   40 +-
 4 files changed, 718 insertions(+), 97 deletions(-)

diff --git a/arch/mips/include/asm/fpu_emulator.h b/arch/mips/include/asm/fpu_emulator.h
index 3b40927..67d5028 100644
--- a/arch/mips/include/asm/fpu_emulator.h
+++ b/arch/mips/include/asm/fpu_emulator.h
@@ -54,6 +54,12 @@ do { \
 extern int mips_dsemul(struct pt_regs *regs, mips_instruction ir,
  unsigned long cpc);
 extern int do_dsemulret(struct pt_regs *xcp);
+extern int fpu_emulator_cop1Handler(struct pt_regs *xcp,
+    struct mips_fpu_struct *ctx, int has_fpu,
+    void *__user *fault_addr);
+int process_fpemu_return(int sig, void __user *fault_addr);
+int mm_isBranchInstr(struct pt_regs *regs, struct decoded_instn dec_insn,
+     unsigned long *contpc);
 
 /*
  * Instruction inserted following the badinst to further tag the sequence
@@ -64,5 +70,6 @@ extern int do_dsemulret(struct pt_regs *xcp);
  * Break instruction with special math emu break code set
  */
 #define BREAK_MATH (0x0000000d | (BRK_MEMU << 16))
+#define MM_BREAK_MATH (0x00000007 | (MM_BRK_MEMU << 16))
 
 #endif /* _ASM_FPU_EMULATOR_H */
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index cc7f4cc..c0dc176 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -671,7 +671,7 @@ asmlinkage void do_ov(struct pt_regs *regs)
  force_sig_info(SIGFPE, &info, current);
 }
 
-static int process_fpemu_return(int sig, void __user *fault_addr)
+int process_fpemu_return(int sig, void __user *fault_addr)
 {
  if (sig == SIGSEGV || sig == SIGBUS) {
  struct siginfo si = {0};
diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c
index 47c77e7..d0fd160 100644
--- a/arch/mips/math-emu/cp1emu.c
+++ b/arch/mips/math-emu/cp1emu.c
@@ -45,6 +45,7 @@
 #include <asm/signal.h>
 #include <asm/mipsregs.h>
 #include <asm/fpu_emulator.h>
+#include <asm/fpu.h>
 #include <asm/uaccess.h>
 #include <asm/branch.h>
 
@@ -110,6 +111,477 @@ static const unsigned int fpucondbit[8] = {
 };
 #endif
 
+/* convert 16-bit register encoding to 32-bit register encoding */
+static const unsigned int reg16to32map[8] = {16, 17, 2, 3, 4, 5, 6, 7};
+
+/* convert micro_mips to mips32 format */
+static const int sd_format[] = {16, 17, 0, 0, 0, 0, 0, 0};
+static const int sdps_format[] = {16, 17, 22, 0, 0, 0, 0, 0};
+static const int dwl_format[] = {17, 20, 21, 0, 0, 0, 0, 0};
+static const int swl_format[] = {16, 20, 21, 0, 0, 0, 0, 0};
+
+/*
+ * This function translates a 32-bit microMIPS instruction into a 32-bit MIPS32 instruction.
+ * It returns 0 or SIGILL.
+ */
+static int micro_mips32_to_mips32(union mips_instruction *insn_ptr)
+{
+ union mips_instruction insn = *insn_ptr;
+ union mips_instruction mips32_insn = insn;  /* assume they are the same */
+ int func;
+ int fmt;
+ int op;
+
+ switch (insn.mm_i_format.opcode) {
+ case mm_ldc132_op:
+ mips32_insn.mm_i_format.opcode = ldc1_op;
+ mips32_insn.mm_i_format.rt = insn.mm_i_format.rs;
+ mips32_insn.mm_i_format.rs = insn.mm_i_format.rt;
+ break;
+ case mm_lwc132_op:
+ mips32_insn.mm_i_format.opcode = lwc1_op;
+ mips32_insn.mm_i_format.rt = insn.mm_i_format.rs;
+ mips32_insn.mm_i_format.rs = insn.mm_i_format.rt;
+ break;
+ case mm_sdc132_op:
+ mips32_insn.mm_i_format.opcode = sdc1_op;
+ mips32_insn.mm_i_format.rt = insn.mm_i_format.rs;
+ mips32_insn.mm_i_format.rs = insn.mm_i_format.rt;
+ break;
+ case mm_swc132_op:
+ mips32_insn.mm_i_format.opcode = swc1_op;
+ mips32_insn.mm_i_format.rt = insn.mm_i_format.rs;
+ mips32_insn.mm_i_format.rs = insn.mm_i_format.rt;
+ break;
+ case mm_pool32i_op:
+ /* NOTE: offset is << by 1 if in micro_mips mode */
+ if ((insn.mm_i_format.rt == mm_bc1f_op) || (insn.mm_i_format.rt == mm_bc1t_op)) {
+ mips32_insn.fb_format.opcode = cop1_op;
+ mips32_insn.fb_format.bc = bc_op;
+ mips32_insn.fb_format.flag = (insn.mm_i_format.rt == mm_bc1t_op) ? 1 : 0;
+ } else
+ return SIGILL;
+ break;
+ case mm_pool32f_op:
+ switch (insn.mm_fp0_format.func) {
+ case mm_32f_01_op:
+ case mm_32f_11_op:
+ case mm_32f_02_op:
+ case mm_32f_12_op:
+ case mm_32f_41_op:
+ case mm_32f_51_op:
+ case mm_32f_42_op:
+ case mm_32f_52_op:
+ op = insn.mm_fp0_format.func;
+ if (op == mm_32f_01_op)
+ func = madd_s_op;
+ else if (op == mm_32f_11_op)
+ func = madd_d_op;
+ else if (op == mm_32f_02_op)
+ func = nmadd_s_op;
+ else if (op == mm_32f_12_op)
+ func = nmadd_d_op;
+ else if (op == mm_32f_41_op)
+ func = msub_s_op;
+ else if (op == mm_32f_51_op)
+ func = msub_d_op;
+ else if (op == mm_32f_42_op)
+ func = nmsub_s_op;
+ else
+ func = nmsub_d_op;
+ mips32_insn.fp6_format.opcode = cop1x_op;
+ mips32_insn.fp6_format.fr = insn.mm_fp6_format.fr;
+ mips32_insn.fp6_format.ft = insn.mm_fp6_format.ft;
+ mips32_insn.fp6_format.fs = insn.mm_fp6_format.fs;
+ mips32_insn.fp6_format.fd = insn.mm_fp6_format.fd;
+ mips32_insn.fp6_format.func = func;
+ break;
+ case mm_32f_10_op:
+ func = -1;  /* set to invalid value */
+ op = insn.mm_fp5_format.op & 0x7;
+ if (op == mm_ldxc1_op)
+ func = ldxc1_op;
+ else if (op == mm_sdxc1_op)
+ func = sdxc1_op;
+ else if (op == mm_lwxc1_op)
+ func = lwxc1_op;
+ else if (op == mm_swxc1_op)
+ func = swxc1_op;
+
+ if (func != -1) {
+ mips32_insn.r_format.opcode = cop1x_op;
+ mips32_insn.r_format.rs = insn.mm_fp5_format.base;
+ mips32_insn.r_format.rt = insn.mm_fp5_format.index;
+ mips32_insn.r_format.rd = 0;
+ mips32_insn.r_format.re = insn.mm_fp5_format.fd;
+ mips32_insn.r_format.func = func;
+ } else
+ return SIGILL;
+ break;
+ case mm_32f_40_op:
+ op = -1;  /* set to invalid value */
+ if (insn.mm_fp2_format.op == mm_fmovt_op)
+ op = 1;
+ else if (insn.mm_fp2_format.op == mm_fmovf_op)
+ op = 0;
+ if (op != -1) {
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = sdps_format[insn.mm_fp2_format.fmt];
+ mips32_insn.fp0_format.ft = (insn.mm_fp2_format.cc<<2) + op;
+ mips32_insn.fp0_format.fs = insn.mm_fp2_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp2_format.fd;
+ mips32_insn.fp0_format.func = fmovc_op;
+ } else
+ return SIGILL;
+ break;
+ case mm_32f_60_op:
+ func = -1;  /* set to invalid value */
+ if (insn.mm_fp0_format.op == mm_fadd_op)
+ func = fadd_op;
+ else if (insn.mm_fp0_format.op == mm_fsub_op)
+ func = fsub_op;
+ else if (insn.mm_fp0_format.op == mm_fmul_op)
+ func = fmul_op;
+ else if (insn.mm_fp0_format.op == mm_fdiv_op)
+ func = fdiv_op;
+ if (func != -1) {
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = sdps_format[insn.mm_fp0_format.fmt];
+ mips32_insn.fp0_format.ft = insn.mm_fp0_format.ft;
+ mips32_insn.fp0_format.fs = insn.mm_fp0_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp0_format.fd;
+ mips32_insn.fp0_format.func = func;
+ } else
+ return SIGILL;
+ break;
+ case mm_32f_70_op:
+ func = -1;  /* set to invalid value */
+ if (insn.mm_fp0_format.op == mm_fmovn_op)
+ func = fmovn_op;
+ else if (insn.mm_fp0_format.op == mm_fmovz_op)
+ func = fmovz_op;
+ if (func != -1) {
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = sdps_format[insn.mm_fp0_format.fmt];
+ mips32_insn.fp0_format.ft = insn.mm_fp0_format.ft;
+ mips32_insn.fp0_format.fs = insn.mm_fp0_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp0_format.fd;
+ mips32_insn.fp0_format.func = func;
+ } else
+ return SIGILL;
+ break;
+ case mm_32f_73_op:    /* POOL32FXF */
+ switch (insn.mm_fp1_format.op) {
+ case mm_movf0_op:
+ case mm_movf1_op:
+ case mm_movt0_op:
+ case mm_movt1_op:
+ if ((insn.mm_fp1_format.op & 0x7f) == mm_movf0_op)
+ op = 0;
+ else
+ op = 1;
+ mips32_insn.r_format.opcode = spec_op;
+ mips32_insn.r_format.rs = insn.mm_fp4_format.fs;
+ mips32_insn.r_format.rt = (insn.mm_fp4_format.cc<<2) + op;
+ mips32_insn.r_format.rd = insn.mm_fp4_format.rt;
+ mips32_insn.r_format.re = 0;
+ mips32_insn.r_format.func = movc_op;
+ break;
+ case mm_fcvtd0_op:
+ case mm_fcvtd1_op:
+ case mm_fcvts0_op:
+ case mm_fcvts1_op:
+ if ((insn.mm_fp1_format.op & 0x7f) == mm_fcvtd0_op) {
+ func = fcvtd_op;
+ fmt = swl_format[insn.mm_fp3_format.fmt];
+ } else {
+ func = fcvts_op;
+ fmt = dwl_format[insn.mm_fp3_format.fmt];
+ }
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = fmt;
+ mips32_insn.fp0_format.ft = 0;
+ mips32_insn.fp0_format.fs = insn.mm_fp3_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp3_format.rt;
+ mips32_insn.fp0_format.func = func;
+ break;
+ case mm_fmov0_op:
+ case mm_fmov1_op:
+ case mm_fabs0_op:
+ case mm_fabs1_op:
+ case mm_fneg0_op:
+ case mm_fneg1_op:
+ if ((insn.mm_fp1_format.op & 0x7f) == mm_fmov0_op)
+ func = fmov_op;
+ else if ((insn.mm_fp1_format.op & 0x7f) == mm_fabs0_op)
+ func = fabs_op;
+ else
+ func = fneg_op;
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = sdps_format[insn.mm_fp3_format.fmt];
+ mips32_insn.fp0_format.ft = 0;
+ mips32_insn.fp0_format.fs = insn.mm_fp3_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp3_format.rt;
+ mips32_insn.fp0_format.func = func;
+ break;
+ case mm_ffloorl_op:
+ case mm_ffloorw_op:
+ case mm_fceill_op:
+ case mm_fceilw_op:
+ case mm_ftruncl_op:
+ case mm_ftruncw_op:
+ case mm_froundl_op:
+ case mm_froundw_op:
+ case mm_fcvtl_op:
+ case mm_fcvtw_op:
+ if (insn.mm_fp1_format.op == mm_ffloorl_op)
+ func = ffloorl_op;
+ else if (insn.mm_fp1_format.op == mm_ffloorw_op)
+ func = ffloor_op;
+ else if (insn.mm_fp1_format.op == mm_fceill_op)
+ func = fceill_op;
+ else if (insn.mm_fp1_format.op == mm_fceilw_op)
+ func = fceil_op;
+ else if (insn.mm_fp1_format.op == mm_ftruncl_op)
+ func = ftruncl_op;
+ else if (insn.mm_fp1_format.op == mm_ftruncw_op)
+ func = ftrunc_op;
+ else if (insn.mm_fp1_format.op == mm_froundl_op)
+ func = froundl_op;
+ else if (insn.mm_fp1_format.op == mm_froundw_op)
+ func = fround_op;
+ else if (insn.mm_fp1_format.op == mm_fcvtl_op)
+ func = fcvtl_op;
+ else
+ func = fcvtw_op;
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = sd_format[insn.mm_fp1_format.fmt];
+ mips32_insn.fp0_format.ft = 0;
+ mips32_insn.fp0_format.fs = insn.mm_fp1_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp1_format.rt;
+ mips32_insn.fp0_format.func = func;
+ break;
+ case mm_frsqrt_op:
+ case mm_fsqrt_op:
+ case mm_frecip_op:
+ if (insn.mm_fp1_format.op == mm_frsqrt_op)
+ func = frsqrt_op;
+ else if (insn.mm_fp1_format.op == mm_fsqrt_op)
+ func = fsqrt_op;
+ else
+ func = frecip_op;
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = sdps_format[insn.mm_fp1_format.fmt];
+ mips32_insn.fp0_format.ft = 0;
+ mips32_insn.fp0_format.fs = insn.mm_fp1_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp1_format.rt;
+ mips32_insn.fp0_format.func = func;
+ break;
+ case mm_mfc1_op:
+ case mm_mtc1_op:
+ case mm_cfc1_op:
+ case mm_ctc1_op:
+ if (insn.mm_fp1_format.op == mm_mfc1_op)
+ op = mfc_op;
+ else if (insn.mm_fp1_format.op == mm_mtc1_op)
+ op = mtc_op;
+ else if (insn.mm_fp1_format.op == mm_cfc1_op)
+ op = cfc_op;
+ else
+ op = ctc_op;
+ mips32_insn.fp1_format.opcode = cop1_op;
+ mips32_insn.fp1_format.op = op;
+ mips32_insn.fp1_format.rt = insn.mm_fp1_format.rt;
+ mips32_insn.fp1_format.fs = insn.mm_fp1_format.fs;
+ mips32_insn.fp1_format.fd = 0;
+ mips32_insn.fp1_format.func = 0;
+ break;
+ default:
+ return SIGILL;
+ break;
+ }
+ break;
+ case mm_32f_74_op:    /* c.cond.fmt */
+ mips32_insn.fp0_format.opcode = cop1_op;
+ mips32_insn.fp0_format.fmt = sdps_format[insn.mm_fp4_format.fmt];
+ mips32_insn.fp0_format.ft = insn.mm_fp4_format.rt;
+ mips32_insn.fp0_format.fs = insn.mm_fp4_format.fs;
+ mips32_insn.fp0_format.fd = insn.mm_fp4_format.cc<<2;
+ mips32_insn.fp0_format.func = insn.mm_fp4_format.cond | MIPS32_COND_FC;
+ break;
+ default:
+ return SIGILL;
+ break;
+ }
+ break;
+ default:
+ return SIGILL;
+ break;
+ }
+
+ *insn_ptr = mips32_insn;
+ return 0;
+}
+
+/* microMIPS version of isBranchInstr(); computes the continuation PC. */
+int mm_isBranchInstr(struct pt_regs *regs, struct decoded_instn dec_insn,
+     unsigned long *contpc)
+{
+ union mips_instruction insn = (union mips_instruction)dec_insn.insn;
+ int bc_false = 0;
+ unsigned int fcr31;
+ unsigned int bit;
+
+ /* NOTE: 16-bit instructions are duplicated into both halves of a 32-bit value. */
+ switch (insn.mm_i_format.opcode) {
+ case mm_pool32a_op:
+ if ((insn.mm_i_format.simmediate & MM_POOL32A_MINOR_MSK) == mm_pool32axf_op) {
+ switch (insn.mm_i_format.simmediate >> MM_POOL32A_MINOR_SFT) {
+ case mm_jalr_op:
+ case mm_jalrhb_op:
+ case mm_jalrs_op:
+ case mm_jalrshb_op:
+ if (insn.mm_i_format.rt != 0)   /* not a mm_jr_op */
+ regs->regs[insn.mm_i_format.rt] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ *contpc = regs->regs[insn.mm_i_format.rs];
+ return 1;
+ break;
+ }
+ }
+ break;
+ case mm_pool32i_op:
+ switch (insn.mm_i_format.rt) {
+ case mm_bltzals_op:
+ case mm_bltzal_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
+ case mm_bltz_op:
+ if ((long)regs->regs[insn.mm_i_format.rs] < 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm_i_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_bgezals_op:
+ case mm_bgezal_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
+ case mm_bgez_op:
+ if ((long)regs->regs[insn.mm_i_format.rs] >= 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm_i_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_blez_op:
+ if ((long)regs->regs[insn.mm_i_format.rs] <= 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm_i_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_bgtz_op:
+ if ((long)regs->regs[insn.mm_i_format.rs] > 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm_i_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_bc2f_op:
+ case mm_bc1f_op:
+ bc_false = 1;
+ /* Fall through */
+ case mm_bc2t_op:
+ case mm_bc1t_op:
+ preempt_disable();
+ if (is_fpu_owner())
+ asm volatile("cfc1\t%0,$31" : "=r" (fcr31));
+ else
+ fcr31 = current->thread.fpu.fcr31;
+ preempt_enable();
+
+ if (bc_false)
+ fcr31 = ~fcr31;
+
+ bit = (insn.mm_i_format.rs >> 2);
+ bit += (bit != 0);
+ bit += 23;
+ if (fcr31 & (1 << bit))
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm_i_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ }
+ break;
+ case mm_pool16c_op:
+ switch (insn.mm_i_format.rt) {
+ case mm_jalr16_op:
+ case mm_jalrs16_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
+ case mm_jr16_op:
+ *contpc = regs->regs[insn.mm_i_format.rs];
+ return 1;
+ break;
+ }
+ break;
+ case mm_beqz16_op:
+ if ((long)regs->regs[reg16to32map[insn.mm16b1_format.rs]] == 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm16b1_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_bnez16_op:
+ if ((long)regs->regs[reg16to32map[insn.mm16b1_format.rs]] != 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm16b1_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_b16_op:
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm16b0_format.simmediate << 1);
+ return 1;
+ break;
+ case mm_beq32_op:
+ if (regs->regs[insn.mm_i_format.rs] == regs->regs[insn.mm_i_format.rt])
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm_i_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_bne32_op:
+ if (regs->regs[insn.mm_i_format.rs] != regs->regs[insn.mm_i_format.rt])
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.mm_i_format.simmediate << 1);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case mm_jalx32_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc +
+ dec_insn.next_pc_inc;
+ *contpc = regs->cp0_epc + dec_insn.pc_inc;
+ *contpc >>= 28;
+ *contpc <<= 28;
+ *contpc |= (insn.j_format.target << 2);
+ return 1;
+ break;
+ case mm_jals32_op:
+ case mm_jal32_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
+ case mm_j32_op:
+ *contpc = regs->cp0_epc + dec_insn.pc_inc;
+ *contpc >>= 27;
+ *contpc <<= 27;
+ *contpc |= (insn.j_format.target << 1);
+ *contpc |= MIPS_ISA_MODE;
+ return 1;
+ break;
+ }
+ return 0;
+}
 
 /*
  * Redundant with logic already in kernel/branch.c,
@@ -117,53 +589,134 @@ static const unsigned int fpucondbit[8] = {
  * a single subroutine should be used across both
  * modules.
  */
-static int isBranchInstr(mips_instruction * i)
+static int isBranchInstr(struct pt_regs *regs, struct decoded_instn dec_insn, unsigned long *contpc)
 {
- switch (MIPSInst_OPCODE(*i)) {
+ union mips_instruction insn = (union mips_instruction)dec_insn.insn;
+ unsigned int fcr31;
+ unsigned int bit = 0;
+
+ switch (insn.i_format.opcode) {
  case spec_op:
- switch (MIPSInst_FUNC(*i)) {
+ switch (insn.r_format.func) {
  case jalr_op:
+ regs->regs[insn.r_format.rd] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
  case jr_op:
+ *contpc = regs->regs[insn.r_format.rs];
  return 1;
+ break;
  }
  break;
-
  case bcond_op:
- switch (MIPSInst_RT(*i)) {
+ switch (insn.i_format.rt) {
+ case bltzal_op:
+ case bltzall_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
  case bltz_op:
- case bgez_op:
  case bltzl_op:
- case bgezl_op:
- case bltzal_op:
+ if ((long)regs->regs[insn.i_format.rs] < 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
  case bgezal_op:
- case bltzall_op:
  case bgezall_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
+ case bgez_op:
+ case bgezl_op:
+ if ((long)regs->regs[insn.i_format.rs] >= 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
  return 1;
+ break;
  }
  break;
-
- case j_op:
- case jal_op:
  case jalx_op:
+ bit = MIPS_ISA_MODE;
+ case jal_op:
+ regs->regs[31] = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ /* Fall through */
+ case j_op:
+ *contpc = regs->cp0_epc + dec_insn.pc_inc;
+ *contpc >>= 28;
+ *contpc <<= 28;
+ *contpc |= (insn.j_format.target << 2);
+ /* Set the microMIPS ISA mode bit; XOR so that jalx toggles it. */
+ *contpc ^= bit;
+ return 1;
+ break;
  case beq_op:
- case bne_op:
- case blez_op:
- case bgtz_op:
  case beql_op:
+ if (regs->regs[insn.i_format.rs] == regs->regs[insn.i_format.rt])
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case bne_op:
  case bnel_op:
+ if (regs->regs[insn.i_format.rs] != regs->regs[insn.i_format.rt])
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case blez_op:
  case blezl_op:
+ if ((long)regs->regs[insn.i_format.rs] <= 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case bgtz_op:
  case bgtzl_op:
+ if ((long)regs->regs[insn.i_format.rs] > 0)
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
  return 1;
-
+ break;
  case cop0_op:
  case cop1_op:
  case cop2_op:
  case cop1x_op:
- if (MIPSInst_RS(*i) == bc_op)
- return 1;
+ if (insn.i_format.rs == bc_op) {
+ preempt_disable();
+ if (is_fpu_owner())
+ asm volatile("cfc1\t%0,$31" : "=r" (fcr31));
+ else
+ fcr31 = current->thread.fpu.fcr31;
+ preempt_enable();
+
+ bit = (insn.i_format.rt >> 2);
+ bit += (bit != 0);
+ bit += 23;
+ switch (insn.i_format.rt & 3) {
+ case 0: /* bc1f */
+ case 2: /* bc1fl */
+ if (~fcr31 & (1 << bit))
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ case 1: /* bc1t */
+ case 3: /* bc1tl */
+ if (fcr31 & (1 << bit))
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + (insn.i_format.simmediate << 2);
+ else
+ *contpc = regs->cp0_epc + dec_insn.pc_inc + dec_insn.next_pc_inc;
+ return 1;
+ break;
+ }  /* end of inner switch-statement */
+ }
  break;
  }
-
  return 0;
 }
 
@@ -210,26 +763,23 @@ static inline int cop1_64bit(struct pt_regs *xcp)
  */
 
 static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
-       void *__user *fault_addr)
+ struct decoded_instn dec_insn, void *__user *fault_addr)
 {
  mips_instruction ir;
- unsigned long emulpc, contpc;
+ unsigned long contpc = xcp->cp0_epc + dec_insn.pc_inc;
  unsigned int cond;
-
- if (!access_ok(VERIFY_READ, xcp->cp0_epc, sizeof(mips_instruction))) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)xcp->cp0_epc;
- return SIGBUS;
- }
- if (__get_user(ir, (mips_instruction __user *) xcp->cp0_epc)) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)xcp->cp0_epc;
- return SIGSEGV;
- }
+ int pc_inc;
 
  /* XXX NEC Vr54xx bug workaround */
- if ((xcp->cp0_cause & CAUSEF_BD) && !isBranchInstr(&ir))
- xcp->cp0_cause &= ~CAUSEF_BD;
+ if (xcp->cp0_cause & CAUSEF_BD) {
+ if (dec_insn.micro_mips_mode) {
+ if (!mm_isBranchInstr(xcp, dec_insn, &contpc))
+ xcp->cp0_cause &= ~CAUSEF_BD;
+ } else {
+ if (!isBranchInstr(xcp, dec_insn, &contpc))
+ xcp->cp0_cause &= ~CAUSEF_BD;
+ }
+ }
 
  if (xcp->cp0_cause & CAUSEF_BD) {
  /*
@@ -244,32 +794,27 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
  * Linux MIPS branch emulator operates on context, updating the
  * cp0_epc.
  */
- emulpc = xcp->cp0_epc + 4; /* Snapshot emulation target */
 
- if (__compute_return_epc(xcp) < 0) {
-#ifdef CP1DBG
- printk("failed to emulate branch at %p\n",
- (void *) (xcp->cp0_epc));
-#endif
- return SIGILL;
- }
- if (!access_ok(VERIFY_READ, emulpc, sizeof(mips_instruction))) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)emulpc;
- return SIGBUS;
- }
- if (__get_user(ir, (mips_instruction __user *) emulpc)) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)emulpc;
- return SIGSEGV;
- }
- /* __compute_return_epc() will have updated cp0_epc */
- contpc = xcp->cp0_epc;
- /* In order not to confuse ptrace() et al, tweak context */
- xcp->cp0_epc = emulpc - 4;
+ /* NOTE: contpc is modified by isBranchInstr() if it is a branch instr */
+
+ ir = dec_insn.next_insn;  /* process delay slot instr */
+ pc_inc = dec_insn.next_pc_inc;
  } else {
- emulpc = xcp->cp0_epc;
- contpc = xcp->cp0_epc + 4;
+ ir = dec_insn.insn;       /* process current instr */
+ pc_inc = dec_insn.pc_inc;
+ }
+
+ /*
+  * microMIPS FPU instructions are a subset of MIPS32 FPU instructions,
+  * so convert them to MIPS32 so that all of the FPU emulation code can
+  * be reused. This cannot be done for branches: e.g. a 16-bit aligned
+  * target address cannot be encoded in a MIPS32 instruction.
+  */
+ if (dec_insn.micro_mips_mode) {
+ /* If the next instruction is 16-bit, it cannot be an FPU instruction. */
+ /* This can happen since this function may be called with non-FPU instructions. */
+ if ((pc_inc == 2) ||
+ (micro_mips32_to_mips32((union mips_instruction *)&ir) == SIGILL))
+ return SIGILL;
  }
 
       emul:
@@ -474,22 +1019,30 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
  /* branch taken: emulate dslot
  * instruction
  */
- xcp->cp0_epc += 4;
- contpc = (xcp->cp0_epc +
- (MIPSInst_SIMM(ir) << 2));
-
- if (!access_ok(VERIFY_READ, xcp->cp0_epc,
-       sizeof(mips_instruction))) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)xcp->cp0_epc;
- return SIGBUS;
- }
- if (__get_user(ir,
-    (mips_instruction __user *) xcp->cp0_epc)) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)xcp->cp0_epc;
- return SIGSEGV;
- }
+ xcp->cp0_epc += dec_insn.pc_inc;
+
+ contpc = MIPSInst_SIMM(ir);
+ ir = dec_insn.next_insn;
+ if (dec_insn.micro_mips_mode) {
+ contpc = (xcp->cp0_epc + (contpc << 1));
+
+ /* If the next instruction is 16-bit, it cannot be an FPU instruction. */
+ if ((dec_insn.next_pc_inc == 2) ||
+ (micro_mips32_to_mips32((union mips_instruction *)&ir) == SIGILL)) {
+
+ /* This instruction will be placed on the stack as 32-bit words, so */
+ /* work around the problem by using a NOP16 as the second halfword.  */
+ if (dec_insn.next_pc_inc == 2)
+ ir = (ir & (~0xffff)) | MM_NOP16;
+
+ /*
+ * Single step the non-cp1
+ * instruction in the dslot
+ */
+ return mips_dsemul(xcp, ir, contpc);
+ }
+ } else
+ contpc = (xcp->cp0_epc + (contpc << 2));
 
  switch (MIPSInst_OPCODE(ir)) {
  case lwc1_op:
@@ -525,8 +1078,8 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
  * branch likely nullifies
  * dslot if not taken
  */
- xcp->cp0_epc += 4;
- contpc += 4;
+ xcp->cp0_epc += dec_insn.pc_inc;
+ contpc += dec_insn.pc_inc;
  /*
  * else continue & execute
  * dslot as normal insn
@@ -1313,25 +1866,58 @@ int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
  int has_fpu, void *__user *fault_addr)
 {
  unsigned long oldepc, prevepc;
- mips_instruction insn;
+ struct decoded_instn dec_insn;
+ u16 instr[4];
+ u16 *instr_ptr;
  int sig = 0;
 
  oldepc = xcp->cp0_epc;
  do {
  prevepc = xcp->cp0_epc;
 
- if (!access_ok(VERIFY_READ, xcp->cp0_epc, sizeof(mips_instruction))) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)xcp->cp0_epc;
- return SIGBUS;
- }
- if (__get_user(insn, (mips_instruction __user *) xcp->cp0_epc)) {
- MIPS_FPU_EMU_INC_STATS(errors);
- *fault_addr = (mips_instruction __user *)xcp->cp0_epc;
- return SIGSEGV;
+ if (is16mode(xcp) && cpu_has_mmips) {
+ /* Fetch the next two microMIPS instructions and decode them into two 32-bit values. */
+ if ((get_user(instr[0], (u16 __user *)(xcp->cp0_epc & ~MIPS_ISA_MODE))) ||
+    (get_user(instr[1], (u16 __user *)((xcp->cp0_epc+2) & ~MIPS_ISA_MODE))) ||
+    (get_user(instr[2], (u16 __user *)((xcp->cp0_epc+4) & ~MIPS_ISA_MODE))) ||
+    (get_user(instr[3], (u16 __user *)((xcp->cp0_epc+6) & ~MIPS_ISA_MODE)))) {
+ MIPS_FPU_EMU_INC_STATS(errors);
+ return SIGBUS;
+ }
+ instr_ptr = instr;
+ /* get 1st instruction */
+ if (mm_is16bit(*instr_ptr)) {
+ dec_insn.insn = (*instr_ptr << 16) | (*instr_ptr); /* duplicate the half-word */
+ dec_insn.pc_inc = 2;         /* 16 bit instr */
+ instr_ptr += 1;
+ } else {
+ dec_insn.insn = (*instr_ptr << 16) | *(instr_ptr+1);
+ dec_insn.pc_inc = 4;         /* 32 bit instr */
+ instr_ptr += 2;
+ }
+ /* get 2nd instruction */
+ if (mm_is16bit(*instr_ptr)) {
+ dec_insn.next_insn = (*instr_ptr << 16) | (*instr_ptr); /* duplicate the half-word */
+ dec_insn.next_pc_inc = 2;    /* 16 bit instr */
+ } else {
+ dec_insn.next_insn = (*instr_ptr << 16) | *(instr_ptr+1);
+ dec_insn.next_pc_inc = 4;    /* 32 bit instr */
+ }
+ dec_insn.micro_mips_mode = 1;
+ } else {
+ if ((get_user(dec_insn.insn, (mips_instruction __user *) xcp->cp0_epc)) ||
+ (get_user(dec_insn.next_insn, (mips_instruction __user *)(xcp->cp0_epc+4)))) {
+ MIPS_FPU_EMU_INC_STATS(errors);
+ return SIGBUS;
+ }
+ dec_insn.pc_inc = 4;
+ dec_insn.next_pc_inc = 4;
+ dec_insn.micro_mips_mode = 0;
  }
- if (insn == 0)
- xcp->cp0_epc += 4; /* skip nops */
+
+ if ((dec_insn.insn == 0) ||
+ ((dec_insn.pc_inc == 2) && ((dec_insn.insn & 0xffff) == MM_NOP16)))
+ xcp->cp0_epc += dec_insn.pc_inc; /* skip nops */
  else {
  /*
  * The 'ieee754_csr' is an alias of
@@ -1341,7 +1927,7 @@ int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
  */
  /* convert to ieee library modes */
  ieee754_csr.rm = ieee_rm[ieee754_csr.rm];
- sig = cop1Emulate(xcp, ctx, fault_addr);
+ sig = cop1Emulate(xcp, ctx, dec_insn, fault_addr);
  /* revert to mips rounding mode */
  ieee754_csr.rm = mips_rm[ieee754_csr.rm];
  }
@@ -1359,6 +1945,8 @@ int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
  /* but if epc has advanced, then ignore it */
  sig = 0;
 
  return sig;
 }
 
diff --git a/arch/mips/math-emu/dsemul.c b/arch/mips/math-emu/dsemul.c
index 384a3b0..67bf6d5 100644
--- a/arch/mips/math-emu/dsemul.c
+++ b/arch/mips/math-emu/dsemul.c
@@ -54,8 +54,15 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir, unsigned long cpc)
  extern asmlinkage void handle_dsemulret(void);
  struct emuframe __user *fr;
  int err;
+ int nop = 0;
 
- if (ir == 0) { /* a nop is easy */
+ if (regs->cp0_epc & 1) {
+ if ((ir >> 16) == MM_NOP16)
+ nop = 1;
+ } else if (ir == 0)
+ nop = 1;
+
+ if (nop == 1) { /* a nop is easy */
  regs->cp0_epc = cpc;
  regs->cp0_cause &= ~CAUSEF_BD;
  return 0;
@@ -91,8 +98,17 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir, unsigned long cpc)
  if (unlikely(!access_ok(VERIFY_WRITE, fr, sizeof(struct emuframe))))
  return SIGBUS;
 
- err = __put_user(ir, &fr->emul);
- err |= __put_user((mips_instruction)BREAK_MATH, &fr->badinst);
+ if (regs->cp0_epc & 1) {
+ err = __put_user(ir >> 16, (u16 __user *)(&fr->emul));
+ err |= __put_user(ir & 0xffff, (u16 __user *)((long)(&fr->emul) + 2));
+ err |= __put_user(MM_BREAK_MATH >> 16, (u16 __user *)(&fr->badinst));
+ err |= __put_user(MM_BREAK_MATH & 0xffff, (u16 __user *)((long)(&fr->badinst) + 2));
+ } else {
+ err = __put_user(ir, &fr->emul);
+ err |= __put_user((mips_instruction)BREAK_MATH, &fr->badinst);
+ }
+
+ /* NOTE: Assume the second instruction is never executed, so it can remain a MIPS32 instruction. */
  err |= __put_user((mips_instruction)BD_COOKIE, &fr->cookie);
  err |= __put_user(cpc, &fr->epc);
 
@@ -101,7 +117,7 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir, unsigned long cpc)
  return SIGBUS;
  }
 
- regs->cp0_epc = (unsigned long) &fr->emul;
+ regs->cp0_epc = ((unsigned long) &fr->emul) | (regs->cp0_epc & 1);
 
  flush_cache_sigtramp((unsigned long)&fr->badinst);
 
@@ -114,9 +130,14 @@ int do_dsemulret(struct pt_regs *xcp)
  unsigned long epc;
  u32 insn, cookie;
  int err = 0;
+ u32 break_math = BREAK_MATH;
+ u16 instr[2];
+
+ if (xcp->cp0_epc & 1)
+ break_math = MM_BREAK_MATH;
 
  fr = (struct emuframe __user *)
- (xcp->cp0_epc - sizeof(mips_instruction));
+ ((xcp->cp0_epc & (~1)) - sizeof(mips_instruction));
 
  /*
  * If we can't even access the area, something is very wrong, but we'll
@@ -131,10 +152,15 @@ int do_dsemulret(struct pt_regs *xcp)
  *  - Is the instruction pointed to by the EPC an BREAK_MATH?
  *  - Is the following memory word the BD_COOKIE?
  */
- err = __get_user(insn, &fr->badinst);
+ if (xcp->cp0_epc & 1) {
+ err = __get_user(instr[0], (u16 __user *)(&fr->badinst));
+ err |= __get_user(instr[1], (u16 __user *)((long)(&fr->badinst) + 2));
+ insn = (instr[0] << 16) | instr[1];
+ } else
+ err = __get_user(insn, &fr->badinst);
  err |= __get_user(cookie, &fr->cookie);
 
- if (unlikely(err || (insn != BREAK_MATH) || (cookie != BD_COOKIE))) {
+ if (unlikely(err || (insn != break_math) || (cookie != BD_COOKIE))) {
  MIPS_FPU_EMU_INC_STATS(errors);
  return 0;
  }
--
1.7.9.5



[PATCH v99,04/13] MIPS: microMIPS: Add support for exception handling.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

All exceptions must be taken in microMIPS mode, never in MIPS32R2
mode, or the kernel crashes. A few 'nop' instructions are used to
maintain the correct alignment of the microMIPS versions of the
exception vectors.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/include/asm/mipsregs.h   |    1 +
 arch/mips/include/asm/stackframe.h |   12 +-
 arch/mips/kernel/cpu-probe.c       |    3 +
 arch/mips/kernel/genex.S           |   74 ++++++---
 arch/mips/kernel/scall32-o32.S     |    9 ++
 arch/mips/kernel/smtc-asm.S        |    3 +
 arch/mips/kernel/traps.c           |  290 ++++++++++++++++++++++++++----------
 arch/mips/mm/tlbex.c               |   21 +++
 arch/mips/mti-sead3/sead3-init.c   |   48 ++++++
 9 files changed, 354 insertions(+), 107 deletions(-)

diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 4b55a5a..5e9707e 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -596,6 +596,7 @@
 #define MIPS_CONF3_RXI (_ULCAST_(1) << 12)
 #define MIPS_CONF3_ULRI (_ULCAST_(1) << 13)
 #define MIPS_CONF3_ISA (_ULCAST_(3) << 14)
+#define MIPS_CONF3_ISA_OE (_ULCAST_(1) << 16)
 
 #define MIPS_CONF4_MMUSIZEEXT (_ULCAST_(255) << 0)
 #define MIPS_CONF4_MMUEXTDEF (_ULCAST_(3) << 14)
diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
index cb41af5..335ce06 100644
--- a/arch/mips/include/asm/stackframe.h
+++ b/arch/mips/include/asm/stackframe.h
@@ -139,7 +139,7 @@
 1: move ra, k0
  li k0, 3
  mtc0 k0, $22
-#endif /* CONFIG_CPU_LOONGSON2F */
+#endif /* CONFIG_CPU_JUMP_WORKAROUNDS */
 #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
  lui k1, %hi(kernelsp)
 #else
@@ -189,6 +189,7 @@
  LONG_S $0, PT_R0(sp)
  mfc0 v1, CP0_STATUS
  LONG_S $2, PT_R2(sp)
+ LONG_S v1, PT_STATUS(sp)
 #ifdef CONFIG_MIPS_MT_SMTC
  /*
  * Ideally, these instructions would be shuffled in
@@ -200,21 +201,20 @@
  LONG_S k0, PT_TCSTATUS(sp)
 #endif /* CONFIG_MIPS_MT_SMTC */
  LONG_S $4, PT_R4(sp)
- LONG_S $5, PT_R5(sp)
- LONG_S v1, PT_STATUS(sp)
  mfc0 v1, CP0_CAUSE
- LONG_S $6, PT_R6(sp)
- LONG_S $7, PT_R7(sp)
+ LONG_S $5, PT_R5(sp)
  LONG_S v1, PT_CAUSE(sp)
+ LONG_S $6, PT_R6(sp)
  MFC0 v1, CP0_EPC
+ LONG_S $7, PT_R7(sp)
 #ifdef CONFIG_64BIT
  LONG_S $8, PT_R8(sp)
  LONG_S $9, PT_R9(sp)
 #endif
+ LONG_S v1, PT_EPC(sp)
  LONG_S $25, PT_R25(sp)
  LONG_S $28, PT_R28(sp)
  LONG_S $31, PT_R31(sp)
- LONG_S v1, PT_EPC(sp)
  ori $28, sp, _THREAD_MASK
  xori $28, _THREAD_MASK
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index 8db7a47..98eb036 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -442,6 +442,9 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
  c->options |= MIPS_CPU_ULRI;
  if (config3 & MIPS_CONF3_ISA)
  c->options |= MIPS_CPU_MICROMIPS;
+#ifdef CONFIG_CPU_MICROMIPS
+ write_c0_config3(read_c0_config3() | MIPS_CONF3_ISA_OE);
+#endif
 
  return config3 & MIPS_CONF_M;
 }
diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index 8882e57..dc7d756 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -5,8 +5,8 @@
  *
  * Copyright (C) 1994 - 2000, 2001, 2003 Ralf Baechle
  * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
- * Copyright (C) 2001 MIPS Technologies, Inc.
  * Copyright (C) 2002, 2007  Maciej W. Rozycki
+ * Copyright (C) 2001, 2012 MIPS Technologies, Inc.  All rights reserved.
  */
 #include <linux/init.h>
 
@@ -22,8 +22,10 @@
 #include <asm/page.h>
 #include <asm/thread_info.h>
 
+#ifdef CONFIG_MIPS_MT_SMTC
 #define PANIC_PIC(msg) \
- .set push; \
+ .set push; \
+ .set nomicromips; \
  .set reorder; \
  PTR_LA a0,8f; \
  .set noat; \
@@ -32,17 +34,10 @@
 9: b 9b; \
  .set pop; \
  TEXT(msg)
+#endif
 
  __INIT
 
-NESTED(except_vec0_generic, 0, sp)
- PANIC_PIC("Exception vector 0 called")
- END(except_vec0_generic)
-
-NESTED(except_vec1_generic, 0, sp)
- PANIC_PIC("Exception vector 1 called")
- END(except_vec1_generic)
-
 /*
  * General exception vector for all other CPUs.
  *
@@ -139,12 +134,19 @@ LEAF(r4k_wait)
  nop
  nop
  nop
+#ifdef CONFIG_CPU_MICROMIPS
+ nop
+ nop
+ nop
+ nop
+#endif
  .set mips3
  wait
  /* end of rollback region (the region size must be power of two) */
- .set pop
 1:
  jr ra
+ nop
+ .set pop
  END(r4k_wait)
 
  .macro BUILD_ROLLBACK_PROLOGUE handler
@@ -202,7 +204,11 @@ NESTED(handle_int, PT_SIZE, sp)
  LONG_L s0, TI_REGS($28)
  LONG_S sp, TI_REGS($28)
  PTR_LA ra, ret_from_irq
- j plat_irq_dispatch
+ PTR_LA  v0, plat_irq_dispatch
+ jr v0
+#ifdef CONFIG_CPU_MICROMIPS
+ nop
+#endif
  END(handle_int)
 
  __INIT
@@ -223,11 +229,14 @@ NESTED(except_vec4, 0, sp)
 /*
  * EJTAG debug exception handler.
  * The EJTAG debug exception entry point is 0xbfc00480, which
- * normally is in the boot PROM, so the boot PROM must do a
+ * normally is in the boot PROM, so the boot PROM must do an
  * unconditional jump to this vector.
  */
 NESTED(except_vec_ejtag_debug, 0, sp)
  j ejtag_debug_handler
+#ifdef CONFIG_CPU_MICROMIPS
+ nop
+#endif
  END(except_vec_ejtag_debug)
 
  __FINIT
@@ -252,9 +261,10 @@ NESTED(except_vec_vi, 0, sp)
 FEXPORT(except_vec_vi_mori)
  ori a0, $0, 0
 #endif /* CONFIG_MIPS_MT_SMTC */
+ PTR_LA v1, except_vec_vi_handler
 FEXPORT(except_vec_vi_lui)
  lui v0, 0 /* Patched */
- j except_vec_vi_handler
+ jr v1
 FEXPORT(except_vec_vi_ori)
  ori v0, 0 /* Patched */
  .set pop
@@ -355,6 +365,9 @@ EXPORT(ejtag_debug_buffer)
  */
 NESTED(except_vec_nmi, 0, sp)
  j nmi_handler
+#ifdef CONFIG_CPU_MICROMIPS
+ nop
+#endif
  END(except_vec_nmi)
 
  __FINIT
@@ -501,13 +514,36 @@ NESTED(nmi_handler, PT_SIZE, sp)
  .set push
  .set noat
  .set noreorder
- /* 0x7c03e83b: rdhwr v1,$29 */
+ /* MIPS32: 0x7c03e83b: rdhwr v1,$29 */
+ /* uMIPS:  0x007d6b3c: rdhwr v1,$29 -- in MIPS16e these      */
+ /* halfwords decode as ADDIUSP $16,0x7d; LI $3,0x3c and never RI. */
  MFC0 k1, CP0_EPC
- lui k0, 0x7c03
- lw k1, (k1)
- ori k0, 0xe83b
- .set reorder
+#if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS64_R2)
+ and     k0, k1, 1
+ beqz    k0, 1f
+ xor     k1, k0
+ lhu     k0, (k1)
+ lhu     k1, 2(k1)
+ ins     k1, k0, 16, 16
+ lui     k0, 0x007d
+ b       docheck
+ ori     k0, 0x6b3c
+1:
+ lui     k0, 0x7c03
+ lw      k1, (k1)
+ ori     k0, 0xe83b
+#else
+ andi    k0, k1, 1
+ bnez    k0, handle_ri
+ lui     k0, 0x7c03
+ lw      k1, (k1)
+ ori     k0, 0xe83b
+#endif
+ .set    reorder
+docheck:
  bne k0, k1, handle_ri /* if not ours */
+
+isrdhwr:
  /* The insn is rdhwr.  No need to check CAUSE.BD here. */
  get_saved_sp /* k1 := current_thread_info */
  .set noreorder
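The fast path above keys off the low bit of EPC: if it is set, the CPU faulted in microMIPS mode and the 32-bit instruction word must be assembled from two halfwords before comparing against the rdhwr encoding. A user-space C sketch of that dispatch (illustrative only, not kernel code; the helper name is hypothetical, but the two encodings are the ones quoted in the patch comments):

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

#define MIPS32_RDHWR_V1_HWR29 0x7c03e83bu /* rdhwr v1,$29 (classic MIPS32) */
#define MM_RDHWR_V1_HWR29     0x007d6b3cu /* rdhwr v1,$29 (microMIPS)      */

/* Decide from the EPC's ISA bit how to fetch the faulting instruction
 * and test whether it is "rdhwr v1,$29". `mem` stands in for the
 * faulting address space. */
static int is_rdhwr_v1_hwr29(uintptr_t epc, const uint8_t *mem)
{
    uint32_t insn;

    if (epc & 1) { /* ISA bit set: microMIPS mode */
        uint16_t hi, lo;
        uintptr_t pc = epc & ~(uintptr_t)1;

        memcpy(&hi, mem + pc, sizeof(hi));     /* first halfword  */
        memcpy(&lo, mem + pc + 2, sizeof(lo)); /* second halfword */
        insn = ((uint32_t)hi << 16) | lo;
        return insn == MM_RDHWR_V1_HWR29;
    }
    memcpy(&insn, mem + epc, sizeof(insn));    /* classic: one aligned word */
    return insn == MIPS32_RDHWR_V1_HWR29;
}
```

This mirrors the assembly's `lhu`/`lhu`/`ins` sequence for the odd-EPC case and the single `lw` for the even-EPC case.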
diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S
index 374f66e..2c0b071 100644
--- a/arch/mips/kernel/scall32-o32.S
+++ b/arch/mips/kernel/scall32-o32.S
@@ -138,9 +138,18 @@ stackargs:
 5: jr t1
  sw t5, 16(sp) # argument #5 to ksp
 
+#ifdef CONFIG_CPU_MICROMIPS
  sw t8, 28(sp) # argument #8 to ksp
+ nop
  sw t7, 24(sp) # argument #7 to ksp
+ nop
  sw t6, 20(sp) # argument #6 to ksp
+ nop
+#else
+ sw t8, 28(sp) # argument #8 to ksp
+ sw t7, 24(sp) # argument #7 to ksp
+ sw t6, 20(sp) # argument #6 to ksp
+#endif
 6: j stack_done # go back
  nop
  .set pop
diff --git a/arch/mips/kernel/smtc-asm.S b/arch/mips/kernel/smtc-asm.S
index 20938a4..8e9ae50 100644
--- a/arch/mips/kernel/smtc-asm.S
+++ b/arch/mips/kernel/smtc-asm.S
@@ -49,6 +49,9 @@ CAN WE PROVE THAT WE WON'T DO THIS IF INTS DISABLED??
  .text
  .align 5
 FEXPORT(__smtc_ipi_vector)
+#ifdef CONFIG_CPU_MICROMIPS
+ nop
+#endif
  .set noat
  /* Disable thread scheduling to make Status update atomic */
  DMT 27 # dmt k1
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index c0dc176..feccbe8 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -8,8 +8,8 @@
  * Copyright (C) 1998 Ulf Carlsson
  * Copyright (C) 1999 Silicon Graphics, Inc.
  * Kevin D. Kissell, [hidden email] and Carsten Langgaard, [hidden email]
- * Copyright (C) 2000, 01 MIPS Technologies, Inc.
  * Copyright (C) 2002, 2003, 2004, 2005, 2007  Maciej W. Rozycki
+ * Copyright (C) 2000, 2001, 2012 MIPS Technologies, Inc.  All rights reserved.
  */
 #include <linux/bug.h>
 #include <linux/compiler.h>
@@ -82,10 +82,6 @@ extern asmlinkage void handle_dsp(void);
 extern asmlinkage void handle_mcheck(void);
 extern asmlinkage void handle_reserved(void);
 
-extern int fpu_emulator_cop1Handler(struct pt_regs *xcp,
-    struct mips_fpu_struct *ctx, int has_fpu,
-    void *__user *fault_addr);
-
 void (*board_be_init)(void);
 int (*board_be_handler)(struct pt_regs *regs, int is_fixup);
 void (*board_nmi_handler_setup)(void);
@@ -491,6 +487,12 @@ asmlinkage void do_be(struct pt_regs *regs)
 #define SYNC   0x0000000f
 #define RDHWR  0x0000003b
 
+/*  microMIPS definitions   */
+#define MM_POOL32A_FUNC 0xfc00ffff
+#define MM_RDHWR        0x00006b3c
+#define MM_RS           0x001f0000
+#define MM_RT           0x03e00000
+
 /*
  * The ll_bit is cleared by r*_switch.S
  */
@@ -605,42 +607,62 @@ static int simulate_llsc(struct pt_regs *regs, unsigned int opcode)
  * Simulate trapping 'rdhwr' instructions to provide user accessible
  * registers not implemented in hardware.
  */
-static int simulate_rdhwr(struct pt_regs *regs, unsigned int opcode)
+static int simulate_rdhwr(struct pt_regs *regs, int rd, int rt)
 {
  struct thread_info *ti = task_thread_info(current);
 
+ perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS,
+ 1, regs, 0);
+ switch (rd) {
+ case 0: /* CPU number */
+ regs->regs[rt] = smp_processor_id();
+ return 0;
+ case 1: /* SYNCI length */
+ regs->regs[rt] = min(current_cpu_data.dcache.linesz,
+     current_cpu_data.icache.linesz);
+ return 0;
+ case 2: /* Read count register */
+ regs->regs[rt] = read_c0_count();
+ return 0;
+ case 3: /* Count register resolution */
+ switch (current_cpu_data.cputype) {
+ case CPU_20KC:
+ case CPU_25KF:
+ regs->regs[rt] = 1;
+ break;
+ default:
+ regs->regs[rt] = 2;
+ }
+ return 0;
+ case 29:
+ regs->regs[rt] = ti->tp_value;
+ return 0;
+ default:
+ return -1;
+ }
+}
+
+static int simulate_rdhwr_normal(struct pt_regs *regs, unsigned int opcode)
+{
  if ((opcode & OPCODE) == SPEC3 && (opcode & FUNC) == RDHWR) {
  int rd = (opcode & RD) >> 11;
  int rt = (opcode & RT) >> 16;
- perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS,
- 1, regs, 0);
- switch (rd) {
- case 0: /* CPU number */
- regs->regs[rt] = smp_processor_id();
- return 0;
- case 1: /* SYNCI length */
- regs->regs[rt] = min(current_cpu_data.dcache.linesz,
-     current_cpu_data.icache.linesz);
- return 0;
- case 2: /* Read count register */
- regs->regs[rt] = read_c0_count();
- return 0;
- case 3: /* Count register resolution */
- switch (current_cpu_data.cputype) {
- case CPU_20KC:
- case CPU_25KF:
- regs->regs[rt] = 1;
- break;
- default:
- regs->regs[rt] = 2;
- }
- return 0;
- case 29:
- regs->regs[rt] = ti->tp_value;
- return 0;
- default:
- return -1;
- }
+
+ simulate_rdhwr(regs, rd, rt);
+ return 0;
+ }
+
+ /* Not ours.  */
+ return -1;
+}
+
+static int simulate_rdhwr_mm(struct pt_regs *regs, unsigned short opcode)
+{
+ if ((opcode & MM_POOL32A_FUNC) == MM_RDHWR) {
+ int rd = (opcode & MM_RS) >> 16;
+ int rt = (opcode & MM_RT) >> 21;
+ simulate_rdhwr(regs, rd, rt);
+ return 0;
  }
 
  /* Not ours.  */
@@ -822,9 +844,29 @@ static void do_trap_or_bp(struct pt_regs *regs, unsigned int code,
 asmlinkage void do_bp(struct pt_regs *regs)
 {
  unsigned int opcode, bcode;
-
- if (__get_user(opcode, (unsigned int __user *) exception_epc(regs)))
- goto out_sigsegv;
+ unsigned long epc;
+ u16 instr[2];
+
+ if (regs->cp0_epc & MIPS_ISA_MODE) {
+ /* calc exception pc */
+ epc = exception_epc(regs);
+ if (cpu_has_mmips) {
+ if ((__get_user(instr[0], (u16 __user *)(epc & ~MIPS_ISA_MODE))) ||
+    (__get_user(instr[1], (u16 __user *)((epc+2) & ~MIPS_ISA_MODE))))
+ goto out_sigsegv;
+    opcode = (instr[0] << 16) | instr[1];
+ } else {
+    /* MIPS16e mode */
+    if (__get_user(instr[0], (u16 __user *)(epc & ~MIPS_ISA_MODE)))
+ goto out_sigsegv;
+    bcode = (instr[0] >> 6) & 0x3f;
+    do_trap_or_bp(regs, bcode, "Break");
+    return;
+ }
+ } else {
+ if (__get_user(opcode, (unsigned int __user *) exception_epc(regs)))
+ goto out_sigsegv;
+ }
 
  /*
  * There is the ancient bug in the MIPS assemblers that the break
@@ -865,13 +907,22 @@ out_sigsegv:
 asmlinkage void do_tr(struct pt_regs *regs)
 {
  unsigned int opcode, tcode = 0;
+ u16 instr[2];
+ unsigned long epc = exception_epc(regs);
 
- if (__get_user(opcode, (unsigned int __user *) exception_epc(regs)))
- goto out_sigsegv;
+ if ((__get_user(instr[0], (u16 __user *)(epc & ~MIPS_ISA_MODE))) ||
+ (__get_user(instr[1], (u16 __user *)((epc+2) & ~MIPS_ISA_MODE))))
+ goto out_sigsegv;
+ opcode = (instr[0] << 16) | instr[1];
 
  /* Immediate versions don't provide a code.  */
- if (!(opcode & OPCODE))
- tcode = ((opcode >> 6) & ((1 << 10) - 1));
+ if (!(opcode & OPCODE)) {
+ if (is16mode(regs))
+ /* microMIPS */
+ tcode = (opcode >> 12) & 0x1f;
+ else
+ tcode = ((opcode >> 6) & ((1 << 10) - 1));
+ }
 
  do_trap_or_bp(regs, tcode, "Trap");
  return;
@@ -884,6 +935,7 @@ asmlinkage void do_ri(struct pt_regs *regs)
 {
  unsigned int __user *epc = (unsigned int __user *)exception_epc(regs);
  unsigned long old_epc = regs->cp0_epc;
+ unsigned long old31 = regs->regs[31];
  unsigned int opcode = 0;
  int status = -1;
 
@@ -896,23 +948,37 @@ asmlinkage void do_ri(struct pt_regs *regs)
  if (unlikely(compute_return_epc(regs) < 0))
  return;
 
- if (unlikely(get_user(opcode, epc) < 0))
- status = SIGSEGV;
+ if (is16mode(regs)) {
+ unsigned short mmop[2] = { 0 };
 
- if (!cpu_has_llsc && status < 0)
- status = simulate_llsc(regs, opcode);
+ if (unlikely(get_user(mmop[0], epc) < 0))
+ status = SIGSEGV;
+ if (unlikely(get_user(mmop[1], epc) < 0))
+ status = SIGSEGV;
+ opcode = (mmop[0] << 16) | mmop[1];
 
- if (status < 0)
- status = simulate_rdhwr(regs, opcode);
+ if (status < 0)
+ status = simulate_rdhwr_mm(regs, opcode);
+ } else {
+ if (unlikely(get_user(opcode, epc) < 0))
+ status = SIGSEGV;
 
- if (status < 0)
- status = simulate_sync(regs, opcode);
+ if (!cpu_has_llsc && status < 0)
+ status = simulate_llsc(regs, opcode);
+
+ if (status < 0)
+ status = simulate_rdhwr_normal(regs, opcode);
+
+ if (status < 0)
+ status = simulate_sync(regs, opcode);
+ }
 
  if (status < 0)
  status = SIGILL;
 
  if (unlikely(status > 0)) {
  regs->cp0_epc = old_epc; /* Undo skip-over.  */
+ regs->regs[31] = old31;
  force_sig(status, current);
  }
 }
@@ -982,7 +1048,7 @@ static int default_cu2_call(struct notifier_block *nfb, unsigned long action,
 asmlinkage void do_cpu(struct pt_regs *regs)
 {
  unsigned int __user *epc;
- unsigned long old_epc;
+ unsigned long old_epc, old31;
  unsigned int opcode;
  unsigned int cpid;
  int status;
@@ -996,26 +1062,41 @@ asmlinkage void do_cpu(struct pt_regs *regs)
  case 0:
  epc = (unsigned int __user *)exception_epc(regs);
  old_epc = regs->cp0_epc;
+ old31 = regs->regs[31];
  opcode = 0;
  status = -1;
 
  if (unlikely(compute_return_epc(regs) < 0))
  return;
 
- if (unlikely(get_user(opcode, epc) < 0))
- status = SIGSEGV;
+ if (is16mode(regs)) {
+ unsigned short mmop[2] = { 0 };
 
- if (!cpu_has_llsc && status < 0)
- status = simulate_llsc(regs, opcode);
+ if (unlikely(get_user(mmop[0], epc) < 0))
+ status = SIGSEGV;
+ if (unlikely(get_user(mmop[1], epc) < 0))
+ status = SIGSEGV;
+ opcode = (mmop[0] << 16) | mmop[1];
 
- if (status < 0)
- status = simulate_rdhwr(regs, opcode);
+ if (status < 0)
+ status = simulate_rdhwr_mm(regs, opcode);
+ } else {
+ if (unlikely(get_user(opcode, epc) < 0))
+ status = SIGSEGV;
+
+ if (!cpu_has_llsc && status < 0)
+ status = simulate_llsc(regs, opcode);
+
+ if (status < 0)
+ status = simulate_rdhwr_normal(regs, opcode);
+ }
 
  if (status < 0)
  status = SIGILL;
 
  if (unlikely(status > 0)) {
  regs->cp0_epc = old_epc; /* Undo skip-over.  */
+ regs->regs[31] = old31;
  force_sig(status, current);
  }
 
@@ -1329,7 +1410,7 @@ asmlinkage void cache_parity_error(void)
 void ejtag_exception_handler(struct pt_regs *regs)
 {
  const int field = 2 * sizeof(unsigned long);
- unsigned long depc, old_epc;
+ unsigned long depc, old_epc, old_ra;
  unsigned int debug;
 
  printk(KERN_DEBUG "SDBBP EJTAG debug exception - not handled yet, just ignored!\n");
@@ -1344,10 +1425,12 @@ void ejtag_exception_handler(struct pt_regs *regs)
  * calculation.
  */
  old_epc = regs->cp0_epc;
+ old_ra = regs->regs[31];
  regs->cp0_epc = depc;
- __compute_return_epc(regs);
+ compute_return_epc(regs);
  depc = regs->cp0_epc;
  regs->cp0_epc = old_epc;
+ regs->regs[31] = old_ra;
  } else
  depc += 4;
  write_c0_depc(depc);
@@ -1388,9 +1471,24 @@ void __init *set_except_vector(int n, void *addr)
  unsigned long handler = (unsigned long) addr;
  unsigned long old_handler = exception_handlers[n];
 
+#ifdef CONFIG_CPU_MICROMIPS
+ /*
+ * Only the TLB handlers are cache aligned with an even
+ * address. All other handlers are on an odd address and
+ * require no modification. Otherwise, MIPS32 mode will
+ * be entered when handling any TLB exceptions. That
+ * would be bad...since we must stay in microMIPS mode.
+ */
+ if (!(handler & 0x1))
+ handler |= 1;
+#endif
  exception_handlers[n] = handler;
  if (n == 0 && cpu_has_divec) {
+#ifdef CONFIG_CPU_MICROMIPS
+ unsigned long jump_mask = ~((1 << 27) - 1);
+#else
  unsigned long jump_mask = ~((1 << 28) - 1);
+#endif
  u32 *buf = (u32 *)(ebase + 0x200);
  unsigned int k0 = 26;
  if ((handler & jump_mask) == ((ebase + 0x200) & jump_mask)) {
@@ -1417,17 +1515,18 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
  unsigned long handler;
  unsigned long old_handler = vi_handlers[n];
  int srssets = current_cpu_data.srsets;
- u32 *w;
+ u16 *h;
  unsigned char *b;
 
  BUG_ON(!cpu_has_veic && !cpu_has_vint);
+ BUG_ON((n < 0) || (n > 9));
 
  if (addr == NULL) {
  handler = (unsigned long) do_default_vi;
  srs = 0;
  } else
  handler = (unsigned long) addr;
- vi_handlers[n] = (unsigned long) addr;
+ vi_handlers[n] = handler;
 
  b = (unsigned char *)(ebase + 0x200 + n*VECTORSPACING);
 
@@ -1446,9 +1545,8 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
  if (srs == 0) {
  /*
  * If no shadow set is selected then use the default handler
- * that does normal register saving and a standard interrupt exit
+ * that does normal register saving and standard interrupt exit
  */
-
  extern char except_vec_vi, except_vec_vi_lui;
  extern char except_vec_vi_ori, except_vec_vi_end;
  extern char rollback_except_vec_vi;
@@ -1461,11 +1559,20 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
  * Status.IM bit to be masked before going there.
  */
  extern char except_vec_vi_mori;
+#if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_BIG_ENDIAN)
+ const int mori_offset = &except_vec_vi_mori - vec_start + 2;
+#else
  const int mori_offset = &except_vec_vi_mori - vec_start;
+#endif
 #endif /* CONFIG_MIPS_MT_SMTC */
- const int handler_len = &except_vec_vi_end - vec_start;
+#if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_BIG_ENDIAN)
+ const int lui_offset = &except_vec_vi_lui - vec_start + 2;
+ const int ori_offset = &except_vec_vi_ori - vec_start + 2;
+#else
  const int lui_offset = &except_vec_vi_lui - vec_start;
  const int ori_offset = &except_vec_vi_ori - vec_start;
+#endif
+ const int handler_len = &except_vec_vi_end - vec_start;
 
  if (handler_len > VECTORSPACING) {
  /*
@@ -1475,30 +1582,44 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
  panic("VECTORSPACING too small");
  }
 
- memcpy(b, vec_start, handler_len);
+ set_handler(((unsigned long)b - ebase), vec_start,
+#ifdef CONFIG_CPU_MICROMIPS
+ (handler_len - 1));
+#else
+ handler_len);
+#endif
 #ifdef CONFIG_MIPS_MT_SMTC
  BUG_ON(n > 7); /* Vector index %d exceeds SMTC maximum. */
 
- w = (u32 *)(b + mori_offset);
- *w = (*w & 0xffff0000) | (0x100 << n);
+ h = (u16 *)(b + mori_offset);
+ *h = (0x100 << n);
 #endif /* CONFIG_MIPS_MT_SMTC */
- w = (u32 *)(b + lui_offset);
- *w = (*w & 0xffff0000) | (((u32)handler >> 16) & 0xffff);
- w = (u32 *)(b + ori_offset);
- *w = (*w & 0xffff0000) | ((u32)handler & 0xffff);
+ h = (u16 *)(b + lui_offset);
+ *h = (handler >> 16) & 0xffff;
+ h = (u16 *)(b + ori_offset);
+ *h = (handler & 0xffff);
  local_flush_icache_range((unsigned long)b,
  (unsigned long)(b+handler_len));
  }
  else {
  /*
- * In other cases jump directly to the interrupt handler
- *
- * It is the handlers responsibility to save registers if required
- * (eg hi/lo) and return from the exception using "eret"
+ * In other cases jump directly to the interrupt handler. It
+ * is the handler's responsibility to save registers if required
+ * (eg hi/lo) and return from the exception using "eret".
  */
- w = (u32 *)b;
- *w++ = 0x08000000 | (((u32)handler >> 2) & 0x03fffff); /* j handler */
- *w = 0;
+ u32 insn;
+
+ h = (u16 *)b;
+ /* j handler */
+#ifdef CONFIG_CPU_MICROMIPS
+ insn = 0xd4000000 | (((u32)handler & 0x07ffffff) >> 1);
+#else
+ insn = 0x08000000 | (((u32)handler & 0x0fffffff) >> 2);
+#endif
+ h[0] = (insn >> 16) & 0xffff;
+ h[1] = insn & 0xffff;
+ h[2] = 0;
+ h[3] = 0;
  local_flush_icache_range((unsigned long)b,
  (unsigned long)(b+8));
  }
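The direct-dispatch branch above writes the jump instruction as two halfwords so the same store path works for both ISAs, and the two encodings differ: classic MIPS `j` carries a 26-bit word index, while the microMIPS 32-bit jump (major opcode 0xd4000000 in the hunk) carries a halfword index. A small sketch of those encodings, using only the constants visible in the patch (function names are illustrative):

```c
#include <stdint.h>
#include <assert.h>

/* Encode "j handler" for either ISA, matching the insn computation in
 * set_vi_srs_handler() above. */
static uint32_t encode_j(uint32_t handler, int micromips)
{
    if (micromips)
        return 0xd4000000u | ((handler & 0x07ffffffu) >> 1); /* halfword index */
    return 0x08000000u | ((handler & 0x0fffffffu) >> 2);     /* word index */
}

/* Split the word into the two halfword stores the patch performs. */
static void emit_j(uint32_t handler, int micromips, uint16_t h[2])
{
    uint32_t insn = encode_j(handler, micromips);

    h[0] = insn >> 16;
    h[1] = insn & 0xffff;
}
```

Storing halfwords rather than a whole word is what lets the same code patch a handler address into a microMIPS instruction stream, where 32-bit instructions are only halfword aligned.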
@@ -1657,7 +1778,11 @@ void __cpuinit per_cpu_trap_init(bool is_boot_cpu)
 /* Install CPU exception handler */
 void __cpuinit set_handler(unsigned long offset, void *addr, unsigned long size)
 {
+#ifdef CONFIG_CPU_MICROMIPS
+ memcpy((void *)(ebase + offset), ((unsigned char *)addr - 1), size);
+#else
  memcpy((void *)(ebase + offset), addr, size);
+#endif
  local_flush_icache_range(ebase + offset, ebase + offset + size);
 }
 
@@ -1691,8 +1816,9 @@ __setup("rdhwr_noopt", set_rdhwr_noopt);
 
 void __init trap_init(void)
 {
- extern char except_vec3_generic, except_vec3_r4000;
+ extern char except_vec3_generic;
  extern char except_vec4;
+ extern char except_vec3_r4000;
  unsigned long i;
  int rollback;
 
@@ -1825,11 +1951,11 @@ void __init trap_init(void)
 
  if (cpu_has_vce)
  /* Special exception: R4[04]00 uses also the divec space. */
- memcpy((void *)(ebase + 0x180), &except_vec3_r4000, 0x100);
+ set_handler(0x180, &except_vec3_r4000, 0x100);
  else if (cpu_has_4kex)
- memcpy((void *)(ebase + 0x180), &except_vec3_generic, 0x80);
+ set_handler(0x180, &except_vec3_generic, 0x80);
  else
- memcpy((void *)(ebase + 0x080), &except_vec3_generic, 0x80);
+ set_handler(0x080, &except_vec3_generic, 0x80);
 
  local_flush_icache_range(ebase, ebase + 0x400);
  flush_tlb_handlers();
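The new `simulate_rdhwr_mm()` above extracts the two register fields from the microMIPS POOL32A encoding with the `MM_RS`/`MM_RT` masks defined earlier in the patch. A stand-alone sketch of that decode (helper name hypothetical; masks copied from the hunk):

```c
#include <stdint.h>
#include <assert.h>

#define MM_POOL32A_FUNC 0xfc00ffffu
#define MM_RDHWR        0x00006b3cu
#define MM_RS           0x001f0000u
#define MM_RT           0x03e00000u

/* Returns 0 and fills *rd/*rt when the word is a microMIPS rdhwr,
 * -1 otherwise ("not ours", as the kernel code puts it). */
static int decode_mm_rdhwr(uint32_t opcode, int *rd, int *rt)
{
    if ((opcode & MM_POOL32A_FUNC) != MM_RDHWR)
        return -1;
    *rd = (opcode & MM_RS) >> 16; /* hardware register number */
    *rt = (opcode & MM_RT) >> 21; /* destination GPR */
    return 0;
}
```

For the `rdhwr v1,$29` word 0x007d6b3c this yields rd = 29 (the thread-pointer hardware register) and rt = 3 (v1), which is exactly the case the exception fast path in genex.S is optimising.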
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 6f3d4007..cd9ad1b 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -2021,6 +2021,13 @@ static void __cpuinit build_r4000_tlb_load_handler(void)
 
  uasm_l_nopage_tlbl(&l, p);
  build_restore_work_registers(&p);
+#ifdef CONFIG_CPU_MICROMIPS
+ if ((unsigned long)tlb_do_page_fault_0 & 1) {
+ uasm_i_lui(&p, K0, uasm_rel_hi((long)tlb_do_page_fault_0));
+ uasm_i_addiu(&p, K0, K0, uasm_rel_lo((long)tlb_do_page_fault_0));
+ uasm_i_jr(&p, K0);
+ } else
+#endif
  uasm_i_j(&p, (unsigned long)tlb_do_page_fault_0 & 0x0fffffff);
  uasm_i_nop(&p);
 
@@ -2068,6 +2075,13 @@ static void __cpuinit build_r4000_tlb_store_handler(void)
 
  uasm_l_nopage_tlbs(&l, p);
  build_restore_work_registers(&p);
+#ifdef CONFIG_CPU_MICROMIPS
+ if ((unsigned long)tlb_do_page_fault_1 & 1) {
+ uasm_i_lui(&p, K0, uasm_rel_hi((long)tlb_do_page_fault_1));
+ uasm_i_addiu(&p, K0, K0, uasm_rel_lo((long)tlb_do_page_fault_1));
+ uasm_i_jr(&p, K0);
+ } else
+#endif
  uasm_i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff);
  uasm_i_nop(&p);
 
@@ -2116,6 +2130,13 @@ static void __cpuinit build_r4000_tlb_modify_handler(void)
 
  uasm_l_nopage_tlbm(&l, p);
  build_restore_work_registers(&p);
+#ifdef CONFIG_CPU_MICROMIPS
+ if ((unsigned long)tlb_do_page_fault_1 & 1) {
+ uasm_i_lui(&p, K0, uasm_rel_hi((long)tlb_do_page_fault_1));
+ uasm_i_addiu(&p, K0, K0, uasm_rel_lo((long)tlb_do_page_fault_1));
+ uasm_i_jr(&p, K0);
+ } else
+#endif
  uasm_i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff);
  uasm_i_nop(&p);
 
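All three tlbex.c hunks use the same pattern: when the page-fault handler's C symbol has an odd address (microMIPS ISA bit set), a plain `j` cannot encode it, so a `lui`/`addiu`/`jr` sequence is emitted instead. A sketch of the address split those sequences rely on, assuming `uasm_rel_hi()`/`uasm_rel_lo()` follow the conventional %hi/%lo rule (lo is sign-extended, hi compensates) -- the uasm internals are not shown in this patch, so that is an assumption:

```c
#include <stdint.h>
#include <assert.h>

/* Sign-extended low half, as loaded by addiu. Assumed semantics of
 * uasm_rel_lo(). */
static int32_t rel_lo(uint32_t addr)
{
    return (int16_t)(addr & 0xffff);
}

/* High half compensated for the sign extension, as loaded by lui.
 * Assumed semantics of uasm_rel_hi(). */
static uint32_t rel_hi(uint32_t addr)
{
    return (addr - (uint32_t)rel_lo(addr)) >> 16;
}
```

Because `jr` jumps to the full register value, the ISA bit in an odd handler address survives the split and recombination, which a `j` target (shifted and truncated) would lose.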
diff --git a/arch/mips/mti-sead3/sead3-init.c b/arch/mips/mti-sead3/sead3-init.c
index a958cad..802fce2 100644
--- a/arch/mips/mti-sead3/sead3-init.c
+++ b/arch/mips/mti-sead3/sead3-init.c
@@ -52,7 +52,41 @@ static void __init mips_nmi_setup(void)
  base = cpu_has_veic ?
  (void *)(CAC_BASE + 0xa80) :
  (void *)(CAC_BASE + 0x380);
+#ifdef CONFIG_CPU_MICROMIPS
+ /*
+ * Decrement the exception vector address by one for microMIPS.
+ */
+ memcpy(base, (&except_vec_nmi - 1), 0x80);
+
+ /*
+ * This is a hack. We do not know if the boot loader was built with
+ * microMIPS instructions or not. If it was not, the NMI exception
+ * code at 0x80000a80 will be taken in MIPS32 mode. The hand coded
+ * assembly below forces us into microMIPS mode if we are a pure
+ * microMIPS kernel. The assembly instructions are:
+ *
+ *  3C1A8000   lui       k0,0x8000
+ *  375A0381   ori       k0,k0,0x381
+ *  03400008   jr        k0
+ *  00000000   nop
+ *
+ * The mode switch occurs by jumping to the unaligned exception
+ * vector address at 0x80000381 which would have been 0x80000380
+ * in MIPS32 mode. The jump to the unaligned address transitions
+ * us into microMIPS mode.
+ */
+ if (!cpu_has_veic) {
+ void *base2 = (void *)(CAC_BASE + 0xa80);
+ *((unsigned int *)base2) = 0x3c1a8000;
+ *((unsigned int *)base2 + 1) = 0x375a0381;
+ *((unsigned int *)base2 + 2) = 0x03400008;
+ *((unsigned int *)base2 + 3) = 0x00000000;
+ flush_icache_range((unsigned long)base2,
+ (unsigned long)base2 + 0x10);
+ }
+#else
  memcpy(base, &except_vec_nmi, 0x80);
+#endif
  flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
 }
 
@@ -63,7 +97,21 @@ static void __init mips_ejtag_setup(void)
  base = cpu_has_veic ?
  (void *)(CAC_BASE + 0xa00) :
  (void *)(CAC_BASE + 0x300);
+#ifdef CONFIG_CPU_MICROMIPS
+ /* Deja vu... */
+ memcpy(base, (&except_vec_ejtag_debug - 1), 0x80);
+ if (!cpu_has_veic) {
+ void *base2 = (void *)(CAC_BASE + 0xa00);
+ *((unsigned int *)base2) = 0x3c1a8000;
+ *((unsigned int *)base2 + 1) = 0x375a0301;
+ *((unsigned int *)base2 + 2) = 0x03400008;
+ *((unsigned int *)base2 + 3) = 0x00000000;
+ flush_icache_range((unsigned long)base2,
+ (unsigned long)base2 + 0x10);
+ }
+#else
  memcpy(base, &except_vec_ejtag_debug, 0x80);
+#endif
  flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
 }
 
--
1.7.9.5



[PATCH v99,05/13] MIPS: microMIPS: Support handling of delay slots.

Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Add logic needed to properly calculate exceptions for delay slots
when in microMIPS or MIPS16e modes.

Signed-off-by: Leonid Yegoshin <[hidden email]>
Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/include/asm/branch.h |   33 +++++++-
 arch/mips/include/asm/inst.h   |    3 +
 arch/mips/kernel/branch.c      |  183 +++++++++++++++++++++++++++++++++++++++-
 arch/mips/kernel/unaligned.c   |    3 +
 4 files changed, 219 insertions(+), 3 deletions(-)

diff --git a/arch/mips/include/asm/branch.h b/arch/mips/include/asm/branch.h
index 888766a..ccc938a 100644
--- a/arch/mips/include/asm/branch.h
+++ b/arch/mips/include/asm/branch.h
@@ -16,11 +16,16 @@ static inline int delay_slot(struct pt_regs *regs)
  return regs->cp0_cause & CAUSEF_BD;
 }
 
+extern int __isa_exception_epc(struct pt_regs *regs);
+
 static inline unsigned long exception_epc(struct pt_regs *regs)
 {
- if (!delay_slot(regs))
+ if (likely(!delay_slot(regs)))
  return regs->cp0_epc;
 
+ if (is16mode(regs))
+ return __isa_exception_epc(regs);
+
  return regs->cp0_epc + 4;
 }
 
@@ -29,9 +34,20 @@ static inline unsigned long exception_epc(struct pt_regs *regs)
 extern int __compute_return_epc(struct pt_regs *regs);
 extern int __compute_return_epc_for_insn(struct pt_regs *regs,
  union mips_instruction insn);
+extern int __MIPS16e_compute_return_epc(struct pt_regs *regs);
+extern int __microMIPS_compute_return_epc(struct pt_regs *regs);
 
+/* For classic MIPS32/64 only; not for the 16-bit ISA variants. */
 static inline int compute_return_epc(struct pt_regs *regs)
 {
+ if (is16mode(regs)) {
+ if (cpu_has_mips16)
+ return __MIPS16e_compute_return_epc(regs);
+ if (cpu_has_mmips)
+ return __microMIPS_compute_return_epc(regs);
+ return regs->cp0_epc;
+ }
+
  if (!delay_slot(regs)) {
  regs->cp0_epc += 4;
  return 0;
@@ -40,4 +56,19 @@ static inline int compute_return_epc(struct pt_regs *regs)
  return __compute_return_epc(regs);
 }
 
+static inline int MIPS16e_compute_return_epc(struct pt_regs *regs,
+     union mips16e_instruction *inst)
+{
+ if (likely(!delay_slot(regs))) {
+ if (inst->ri.opcode == MIPS16e_extend_op) {
+ regs->cp0_epc += 4;
+ return 0;
+ }
+ regs->cp0_epc += 2;
+ return 0;
+ }
+
+ return __MIPS16e_compute_return_epc(regs);
+}
+
 #endif /* _ASM_BRANCH_H */
diff --git a/arch/mips/include/asm/inst.h b/arch/mips/include/asm/inst.h
index 2b2e0e3..c97b854 100644
--- a/arch/mips/include/asm/inst.h
+++ b/arch/mips/include/asm/inst.h
@@ -1122,6 +1122,9 @@ struct decoded_instn {
  int micro_mips_mode;
 };
 
+/* Recode table from MIPS16e register notation to GPR. */
+extern const int mips16e_reg2gpr[];
+
 union mips16e_instruction {
  unsigned int full:16;
  struct rr rr;
diff --git a/arch/mips/kernel/branch.c b/arch/mips/kernel/branch.c
index 4d735d0..01b4e91 100644
--- a/arch/mips/kernel/branch.c
+++ b/arch/mips/kernel/branch.c
@@ -14,10 +14,188 @@
 #include <asm/cpu.h>
 #include <asm/cpu-features.h>
 #include <asm/fpu.h>
+#include <asm/fpu_emulator.h>
 #include <asm/inst.h>
 #include <asm/ptrace.h>
 #include <asm/uaccess.h>
 
+/*
+ * Calculate and return the exception EPC when the exception occurred
+ * in a branch delay slot in microMIPS or MIPS16e mode.
+ * The ISA mode bit is not cleared.
+ */
+int __isa_exception_epc(struct pt_regs *regs)
+{
+ long epc;
+ union mips16e_instruction inst;
+
+ /* calc exception pc in branch delay slot */
+ epc = regs->cp0_epc;
+ if (__get_user(inst.full, (u16 __user *) (epc & ~MIPS_ISA_MODE))) {
+ /* This should never happen, because the delay slot was already checked. */
+ force_sig(SIGSEGV, current);
+ return epc;
+ }
+ if (cpu_has_mips16) {
+ if (inst.ri.opcode == MIPS16e_jal_op)
+ epc += 4;
+ else
+ epc += 2;
+ } else if (mm_is16bit(inst.full))
+ epc += 2;
+ else
+ epc += 4;
+
+ return epc;
+}
+
+/*
+ * Compute the return address and emulate the branch in MIPS16e mode,
+ * if required. For use after an exception only: compact branches and
+ * jumps are not handled and this cannot be used during an interrupt,
+ * since compact branches/jumps do not cause exceptions.
+ */
+int __MIPS16e_compute_return_epc(struct pt_regs *regs)
+{
+ u16 __user *addr;
+ union mips16e_instruction inst;
+ u16 inst2;
+ u32 fullinst;
+ long epc;
+
+ epc = regs->cp0_epc;
+ /*
+ * Read the instruction
+ */
+ addr = (u16 __user *) (epc & ~MIPS_ISA_MODE);
+ if (__get_user(inst.full, addr)) {
+ force_sig(SIGSEGV, current);
+ return -EFAULT;
+ }
+
+ switch (inst.ri.opcode) {
+ case MIPS16e_extend_op:
+ regs->cp0_epc += 4;
+ return 0;
+
+ /*
+ *  JAL and JALX in MIPS16e mode
+ */
+ case MIPS16e_jal_op:
+ addr += 1;
+ if (__get_user(inst2, addr)) {
+ force_sig(SIGSEGV, current);
+ return -EFAULT;
+ }
+ fullinst = ((unsigned)inst.full << 16) | inst2;
+ regs->regs[31] = epc + 6;
+ epc += 4;
+ epc >>= 28;
+ epc <<= 28;
+ /*
+ * JAL:5 X:1 TARGET[20-16]:5 TARGET[25:21]:5 TARGET[15:0]:16
+ *
+ * ......TARGET[15:0].................TARGET[20:16]...........
+ * ......TARGET[25:21]
+ */
+ epc |=
+    ((fullinst & 0xffff) << 2) | ((fullinst & 0x3e00000) >> 3) |
+    ((fullinst & 0x1f0000) << 7);
+ if (!inst.jal.x)
+ epc |= MIPS_ISA_MODE; /* set ISA mode 1 */
+ regs->cp0_epc = epc;
+ return 0;
+
+ /*
+ *  J(AL)R(C)
+ */
+ case MIPS16e_rr_op:
+ if (inst.rr.func == MIPS16e_jr_func) {
+
+ if (inst.rr.ra)
+ regs->cp0_epc = regs->regs[31];
+ else
+ regs->cp0_epc =
+    regs->regs[mips16e_reg2gpr[inst.rr.rx]];
+
+ if (inst.rr.l) {
+ if (inst.rr.nd)
+ regs->regs[31] = epc + 2;
+ else
+ regs->regs[31] = epc + 4;
+ }
+ return 0;
+ }
+ break;
+ }
+
+ /* All other cases are 16-bit instructions with no branch delay
+    slot, and their branches do not cause exceptions. */
+ regs->cp0_epc += 2;
+
+ return 0;
+}
+
+/*
+ * Compute the return address and emulate the branch in microMIPS
+ * mode, if required. For use after an exception only: compact
+ * branches and jumps are not handled and this cannot be used during
+ * an interrupt, since compact branches/jumps do not cause exceptions.
+ */
+int __microMIPS_compute_return_epc(struct pt_regs *regs)
+{
+ u16 __user *pc16;
+ u16 halfword;
+ unsigned int word;
+ unsigned long contpc;
+ struct decoded_instn mminst = { 0 };
+
+ mminst.micro_mips_mode = 1;
+
+ /*
+ * This load never faults.
+ */
+ pc16 = (unsigned short __user *)(regs->cp0_epc & ~MIPS_ISA_MODE);
+ __get_user(halfword, pc16);
+ pc16++;
+ contpc = regs->cp0_epc + 2;
+ word = ((unsigned int)halfword << 16);
+ mminst.pc_inc = 2;
+
+ if (!mm_is16bit(halfword)) {
+ __get_user(halfword, pc16);
+ pc16++;
+ contpc = regs->cp0_epc + 4;
+ mminst.pc_inc = 4;
+ word |= halfword;
+ }
+ mminst.insn = word;
+
+ if (get_user(halfword, pc16))
+ goto sigsegv;
+ mminst.next_pc_inc = 2;
+ word = ((unsigned int)halfword << 16);
+
+ if (!mm_is16bit(halfword)) {
+ pc16++;
+ if (get_user(halfword, pc16))
+ goto sigsegv;
+ mminst.next_pc_inc = 4;
+ word |= halfword;
+ }
+ mminst.next_insn = word;
+
+ mm_isBranchInstr(regs, mminst, &contpc);
+
+ regs->cp0_epc = contpc;
+
+ return 0;
+
+sigsegv:
+ force_sig(SIGSEGV, current);
+ return -EFAULT;
+}
+
 /**
  * __compute_return_epc_for_insn - Computes the return address and do emulate
  *    branch simulation, if required.
@@ -57,7 +235,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
  */
  case bcond_op:
  switch (insn.i_format.rt) {
- case bltz_op:
+ case bltz_op:
  case bltzl_op:
  if ((long)regs->regs[insn.i_format.rs] < 0) {
  epc = epc + 4 + (insn.i_format.simmediate << 2);
@@ -129,6 +307,8 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
  epc <<= 28;
  epc |= (insn.j_format.target << 2);
  regs->cp0_epc = epc;
+ if (insn.i_format.opcode == jalx_op)
+ regs->cp0_epc |= MIPS_ISA_MODE;
  break;
 
  /*
@@ -289,5 +469,4 @@ unaligned:
  printk("%s: unaligned epc - sending SIGBUS.\n", current->comm);
  force_sig(SIGBUS, current);
  return -EFAULT;
-
 }
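The JAL/JALX case in `__MIPS16e_compute_return_epc()` above reassembles the scrambled 26-bit jump target: relative to TARGET[15:0], the TARGET[20:16] and TARGET[25:21] fields are stored swapped in the extended instruction word. A hypothetical stand-alone helper performing the same bit shuffle as the `epc |= ...` expression in the hunk:

```c
#include <stdint.h>
#include <assert.h>

/* Rebuild the MIPS16e JAL/JALX target address from the combined 32-bit
 * instruction word, within the current 256MB region. */
static uint32_t mips16e_jal_target(uint32_t fullinst, uint32_t region)
{
    uint32_t addr = region & 0xf0000000u;       /* keep the region bits */

    addr |= ((fullinst & 0x0000ffffu) << 2)     /* TARGET[15:0]  -> addr[17:2]  */
          | ((fullinst & 0x03e00000u) >> 3)     /* TARGET[20:16] -> addr[22:18] */
          | ((fullinst & 0x001f0000u) << 7);    /* TARGET[25:21] -> addr[27:23] */
    return addr;
}
```

A quick way to check the shuffle is to pack a known 26-bit word index into the swapped field layout and confirm the helper recovers the index shifted left by two.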
diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
index 9c58bdf..ad855db 100644
--- a/arch/mips/kernel/unaligned.c
+++ b/arch/mips/kernel/unaligned.c
@@ -102,6 +102,9 @@ static u32 unaligned_action;
 #endif
 extern void show_registers(struct pt_regs *regs);
 
+/* Recode table from MIPS16e register notation to GPR. */
+const int mips16e_reg2gpr[] = { 16, 17, 2, 3, 4, 5, 6, 7 };
+
 static void emulate_load_store_insn(struct pt_regs *regs,
  void __user *addr, unsigned int __user *pc)
 {
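The recode table added above maps the 3-bit register field used by MIPS16e instruction encodings to architectural GPR numbers ($16/$17 = s0/s1, then $2..$7 = v0, v1, a0..a3). A minimal sketch of its use (accessor name hypothetical; the table values are copied from the hunk):

```c
#include <assert.h>

/* Recode table from MIPS16e register notation to GPR, as in the patch. */
static const int mips16e_reg2gpr[] = { 16, 17, 2, 3, 4, 5, 6, 7 };

/* Translate a 3-bit MIPS16e register field to its GPR number. */
static int mips16e_gpr(unsigned int field)
{
    return mips16e_reg2gpr[field & 0x7];
}
```

This is what lets `__MIPS16e_compute_return_epc()` index `regs->regs[]` directly from the `rx` field of a JR instruction.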
--
1.7.9.5



[PATCH v99,06/13] MIPS: microMIPS: Add unaligned access support.

Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Add logic needed to properly handle unaligned accesses when in
microMIPS or MIPS16e modes.

Signed-off-by: Leonid Yegoshin <[hidden email]>
Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/kernel/process.c   |  101 +++
 arch/mips/kernel/unaligned.c | 1493 ++++++++++++++++++++++++++++++++++++------
 2 files changed, 1388 insertions(+), 206 deletions(-)

diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index 69b17a9..a64409c 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -7,6 +7,7 @@
  * Copyright (C) 2005, 2006 by Ralf Baechle ([hidden email])
  * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
  * Copyright (C) 2004 Thiemo Seufer
+ * Copyright (C) 2012 MIPS Technologies, Inc.  All rights reserved.
  */
 #include <linux/errno.h>
 #include <linux/sched.h>
@@ -260,34 +261,115 @@ struct mips_frame_info {
 
 static inline int is_ra_save_ins(union mips_instruction *ip)
 {
+#ifdef CONFIG_CPU_MICROMIPS
+ union mips_instruction mmi;
+
+ /*
+ * swsp ra,offset
+ * swm16 reglist,offset(sp)
+ * swm32 reglist,offset(sp)
+ * sw32 ra,offset(sp)
+ * jraddiusp - NOT SUPPORTED
+ *
+ * microMIPS is way more fun...
+ */
+ if (mm_is16bit(ip->halfword[0])) {
+ mmi.word = (ip->halfword[0] << 16);
+ return ((mmi.mm16_r5_format.opcode == mm_swsp16_op &&
+ mmi.mm16_r5_format.rt == 31) ||
+ (mmi.mm16_m_format.opcode == mm_pool16c_op &&
+ mmi.mm16_m_format.func == mm_swm16_op));
+ }
+ else {
+ mmi.halfword[0] = ip->halfword[1];
+ mmi.halfword[1] = ip->halfword[0];
+ return ((mmi.mm_m_format.opcode == mm_pool32b_op &&
+ mmi.mm_m_format.rd > 9 &&
+ mmi.mm_m_format.base == 29 &&
+ mmi.mm_m_format.func == mm_swm32_func) ||
+ (mmi.i_format.opcode == mm_sw32_op &&
+ mmi.i_format.rs == 29 &&
+ mmi.i_format.rt == 31));
+ }
+#else
  /* sw / sd $ra, offset($sp) */
  return (ip->i_format.opcode == sw_op || ip->i_format.opcode == sd_op) &&
  ip->i_format.rs == 29 &&
  ip->i_format.rt == 31;
+#endif
 }
 
 static inline int is_jal_jalr_jr_ins(union mips_instruction *ip)
 {
+#ifdef CONFIG_CPU_MICROMIPS
+ /*
+ * jr16,jrc,jalr16,jalrs16
+ * jal
+ * jalr/jr,jalr.hb/jr.hb,jalrs,jalrs.hb
+ * jraddiusp - NOT SUPPORTED
+ *
+ * microMIPS is kind of more fun...
+ */
+ union mips_instruction mmi;
+
+ mmi.word = (ip->halfword[0] << 16);
+
+ if ((mmi.mm16_r5_format.opcode == mm_pool16c_op &&
+    (mmi.mm16_r5_format.rt & mm_jr16_op) == mm_jr16_op) ||
+    ip->j_format.opcode == mm_jal32_op)
+ return 1;
+ if (ip->r_format.opcode != mm_pool32a_op ||
+ ip->r_format.func != mm_pool32axf_op)
+ return 0;
+ return (((ip->u_format.uimmediate >> 6) & mm_jalr_op) == mm_jalr_op);
+#else
  if (ip->j_format.opcode == jal_op)
  return 1;
  if (ip->r_format.opcode != spec_op)
  return 0;
  return ip->r_format.func == jalr_op || ip->r_format.func == jr_op;
+#endif
 }
 
 static inline int is_sp_move_ins(union mips_instruction *ip)
 {
+#ifdef CONFIG_CPU_MICROMIPS
+ /*
+ * addiusp -imm
+ * addius5 sp,-imm
+ * addiu32 sp,sp,-imm
+ * jraddiusp - NOT SUPPORTED
+ *
+ * microMIPS is not more fun...
+ */
+ if (mm_is16bit(ip->halfword[0])) {
+ union mips_instruction mmi;
+
+ mmi.word = (ip->halfword[0] << 16);
+ return ((mmi.mm16_r3_format.opcode == mm_pool16d_op &&
+ mmi.mm16_r3_format.simmediate & mm_addiusp_func) ||
+ (mmi.mm16_r5_format.opcode == mm_pool16d_op &&
+ mmi.mm16_r5_format.rt == 29));
+ }
+ return (ip->mm_i_format.opcode == mm_addiu32_op &&
+ ip->mm_i_format.rt == 29 && ip->mm_i_format.rs == 29);
+#else
  /* addiu/daddiu sp,sp,-imm */
  if (ip->i_format.rs != 29 || ip->i_format.rt != 29)
  return 0;
  if (ip->i_format.opcode == addiu_op || ip->i_format.opcode == daddiu_op)
  return 1;
+#endif
  return 0;
 }
 
 static int get_frame_info(struct mips_frame_info *info)
 {
+#ifdef CONFIG_CPU_MICROMIPS
+ union mips_instruction *ip = (void *) (((char *) info->func) - 1);
+#else
  union mips_instruction *ip = info->func;
+#endif
  unsigned max_insns = info->func_size / sizeof(union mips_instruction);
  unsigned i;
 
@@ -307,7 +389,26 @@ static int get_frame_info(struct mips_frame_info *info)
  break;
  if (!info->frame_size) {
  if (is_sp_move_ins(ip))
+ {
+#ifdef CONFIG_CPU_MICROMIPS
+ if (mm_is16bit(ip->halfword[0]))
+ {
+ unsigned short tmp;
+
+ if (ip->halfword[0] & mm_addiusp_func)
+ {
+ tmp = (((ip->halfword[0] >> 1) & 0x1ff) << 2);
+ info->frame_size = -(signed short)(tmp | ((tmp & 0x100) ? 0xfe00 : 0));
+ } else {
+ tmp = (ip->halfword[0] >> 1);
+ info->frame_size = -(signed short)(tmp & 0xf);
+ }
+ ip = (void *) &ip->halfword[1];
+ ip--;
+ } else
+#endif
  info->frame_size = - ip->i_format.simmediate;
+ }
  continue;
  }
  if (info->pc_offset == -1 && is_ra_save_ins(ip)) {
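The ADDIUSP frame-size arithmetic in the get_frame_info() hunk above can be checked in isolation. The sketch below transcribes the patch's expression verbatim for the 16-bit case rather than re-deriving the microMIPS encoding: the immediate field is taken from the halfword, scaled by 4, and conditionally sign-extended before negation.

```c
#include <assert.h>

/* Frame-size decode for the 16-bit ADDIUSP case, following the exact
 * arithmetic used in get_frame_info() (hw is the first instruction
 * halfword). */
static int mm_addiusp_frame_size(unsigned short hw)
{
	unsigned short tmp = ((hw >> 1) & 0x1ff) << 2;

	/* Negative because ADDIUSP grows the stack downward. */
	return -(signed short)(tmp | ((tmp & 0x100) ? 0xfe00 : 0));
}
```

For example, an immediate field of 4 yields a 16-byte frame.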
diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
index ad855db..3f5f21a 100644
--- a/arch/mips/kernel/unaligned.c
+++ b/arch/mips/kernel/unaligned.c
@@ -85,6 +85,8 @@
 #include <asm/cop2.h>
 #include <asm/inst.h>
 #include <asm/uaccess.h>
+#include <asm/fpu.h>
+#include <asm/fpu_emulator.h>
 
 #define STR(x)  __STR(x)
 #define __STR(x)  #x
@@ -105,12 +107,333 @@ extern void show_registers(struct pt_regs *regs);
 /* Recode table from MIPS16e register notation to GPR. */
 const int mips16e_reg2gpr[] = { 16, 17, 2, 3, 4, 5, 6, 7 };
 
+#ifdef __BIG_ENDIAN
+#define     LoadHW(addr, value, res)  \
+ __asm__ __volatile__ (".set\tnoat\n"        \
+ "1:\tlb\t%0, 0(%2)\n"               \
+ "2:\tlbu\t$1, 1(%2)\n\t"            \
+ "sll\t%0, 0x8\n\t"                  \
+ "or\t%0, $1\n\t"                    \
+ "li\t%1, 0\n"                       \
+ "3:\t.set\tat\n\t"                  \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadW(addr, value, res)   \
+ __asm__ __volatile__ (                      \
+ "1:\tlwl\t%0, (%2)\n"               \
+ "2:\tlwr\t%0, 3(%2)\n\t"            \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadHWU(addr, value, res) \
+ __asm__ __volatile__ (                      \
+ ".set\tnoat\n"                      \
+ "1:\tlbu\t%0, 0(%2)\n"              \
+ "2:\tlbu\t$1, 1(%2)\n\t"            \
+ "sll\t%0, 0x8\n\t"                  \
+ "or\t%0, $1\n\t"                    \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".set\tat\n\t"                      \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadWU(addr, value, res)  \
+ __asm__ __volatile__ (                      \
+ "1:\tlwl\t%0, (%2)\n"               \
+ "2:\tlwr\t%0, 3(%2)\n\t"            \
+ "dsll\t%0, %0, 32\n\t"              \
+ "dsrl\t%0, %0, 32\n\t"              \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ "\t.section\t.fixup,\"ax\"\n\t"     \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadDW(addr, value, res)  \
+ __asm__ __volatile__ (                      \
+ "1:\tldl\t%0, (%2)\n"               \
+ "2:\tldr\t%0, 7(%2)\n\t"            \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ "\t.section\t.fixup,\"ax\"\n\t"     \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     StoreHW(addr, value, res) \
+ __asm__ __volatile__ (                      \
+ ".set\tnoat\n"                      \
+ "1:\tsb\t%1, 1(%2)\n\t"             \
+ "srl\t$1, %1, 0x8\n"                \
+ "2:\tsb\t$1, 0(%2)\n\t"             \
+ ".set\tat\n\t"                      \
+ "li\t%0, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%0, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=r" (res)                        \
+ : "r" (value), "r" (addr), "i" (-EFAULT));
+
+#define     StoreW(addr, value, res)  \
+ __asm__ __volatile__ (                      \
+ "1:\tswl\t%1,(%2)\n"                \
+ "2:\tswr\t%1, 3(%2)\n\t"            \
+ "li\t%0, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%0, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=r" (res)                                \
+ : "r" (value), "r" (addr), "i" (-EFAULT));
+
+#define     StoreDW(addr, value, res) \
+ __asm__ __volatile__ (                      \
+ "1:\tsdl\t%1,(%2)\n"                \
+ "2:\tsdr\t%1, 7(%2)\n\t"            \
+ "li\t%0, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%0, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=r" (res)                                \
+ : "r" (value), "r" (addr), "i" (-EFAULT));
+#endif
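The big-endian LoadHW() macro above merges a signed lb of byte 0 with an unsigned lbu of byte 1 via sll/or, producing a sign-extended halfword without an unaligned access. A portable C model of just that merge (the .fixup/__ex_table fault machinery has no userland analogue and is omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Big-endian halfword assembly: the sign comes from the byte at
 * offset 0 (lb), the low bits from the byte at offset 1 (lbu). */
static int32_t load_hw_be(const uint8_t *p)
{
	return (int16_t)(((uint16_t)p[0] << 8) | p[1]);
}
```

The little-endian variant simply swaps which offset supplies the sign byte.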
+
+#ifdef __LITTLE_ENDIAN
+#define     LoadHW(addr, value, res)  \
+ __asm__ __volatile__ (".set\tnoat\n"        \
+ "1:\tlb\t%0, 1(%2)\n"               \
+ "2:\tlbu\t$1, 0(%2)\n\t"            \
+ "sll\t%0, 0x8\n\t"                  \
+ "or\t%0, $1\n\t"                    \
+ "li\t%1, 0\n"                       \
+ "3:\t.set\tat\n\t"                  \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadW(addr, value, res)   \
+ __asm__ __volatile__ (                      \
+ "1:\tlwl\t%0, 3(%2)\n"              \
+ "2:\tlwr\t%0, (%2)\n\t"             \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadHWU(addr, value, res) \
+ __asm__ __volatile__ (                      \
+ ".set\tnoat\n"                      \
+ "1:\tlbu\t%0, 1(%2)\n"              \
+ "2:\tlbu\t$1, 0(%2)\n\t"            \
+ "sll\t%0, 0x8\n\t"                  \
+ "or\t%0, $1\n\t"                    \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".set\tat\n\t"                      \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadWU(addr, value, res)  \
+ __asm__ __volatile__ (                      \
+ "1:\tlwl\t%0, 3(%2)\n"              \
+ "2:\tlwr\t%0, (%2)\n\t"             \
+ "dsll\t%0, %0, 32\n\t"              \
+ "dsrl\t%0, %0, 32\n\t"              \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ "\t.section\t.fixup,\"ax\"\n\t"     \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     LoadDW(addr, value, res)  \
+ __asm__ __volatile__ (                      \
+ "1:\tldl\t%0, 7(%2)\n"              \
+ "2:\tldr\t%0, (%2)\n\t"             \
+ "li\t%1, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ "\t.section\t.fixup,\"ax\"\n\t"     \
+ "4:\tli\t%1, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=&r" (value), "=r" (res)         \
+ : "r" (addr), "i" (-EFAULT));
+
+#define     StoreHW(addr, value, res) \
+ __asm__ __volatile__ (                      \
+ ".set\tnoat\n"                      \
+ "1:\tsb\t%1, 0(%2)\n\t"             \
+ "srl\t$1,%1, 0x8\n"                 \
+ "2:\tsb\t$1, 1(%2)\n\t"             \
+ ".set\tat\n\t"                      \
+ "li\t%0, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%0, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=r" (res)                        \
+ : "r" (value), "r" (addr), "i" (-EFAULT));
+
+#define     StoreW(addr, value, res)  \
+ __asm__ __volatile__ (                      \
+ "1:\tswl\t%1, 3(%2)\n"              \
+ "2:\tswr\t%1, (%2)\n\t"             \
+ "li\t%0, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%0, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=r" (res)                                \
+ : "r" (value), "r" (addr), "i" (-EFAULT));
+
+#define     StoreDW(addr, value, res) \
+ __asm__ __volatile__ (                      \
+ "1:\tsdl\t%1, 7(%2)\n"              \
+ "2:\tsdr\t%1, (%2)\n\t"             \
+ "li\t%0, 0\n"                       \
+ "3:\n\t"                            \
+ ".insn\n\t"                         \
+ ".section\t.fixup,\"ax\"\n\t"       \
+ "4:\tli\t%0, %3\n\t"                \
+ "j\t3b\n\t"                         \
+ ".previous\n\t"                     \
+ ".section\t__ex_table,\"a\"\n\t"    \
+ STR(PTR)"\t1b, 4b\n\t"              \
+ STR(PTR)"\t2b, 4b\n\t"              \
+ ".previous"                         \
+ : "=r" (res)                                \
+ : "r" (value), "r" (addr), "i" (-EFAULT));
+#endif
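In the 64-bit LoadWU() variants above, the dsll/dsrl pair by 32 zero-extends the 32-bit lwl/lwr result held in a 64-bit register, discarding whatever sat in the upper half. The same trick expressed in C:

```c
#include <assert.h>
#include <stdint.h>

/* Equivalent of "dsll %0, %0, 32; dsrl %0, %0, 32": keep only the
 * low 32 bits of a 64-bit register, zero-extended. */
static uint64_t zero_extend_w(int64_t v)
{
	return ((uint64_t)v << 32) >> 32;
}
```

This matters because LWU must not sign-extend, unlike the plain LW path.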
+
 static void emulate_load_store_insn(struct pt_regs *regs,
- void __user *addr, unsigned int __user *pc)
+    void __user *addr,
+    unsigned int __user *pc)
 {
  union mips_instruction insn;
  unsigned long value;
  unsigned int res;
+ unsigned long origpc;
+ unsigned long orig31;
+ void __user *fault_addr = NULL;
+
+ origpc = (unsigned long)pc;
+ orig31 = regs->regs[31];
 
  perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, 0);
 
@@ -120,22 +443,22 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  __get_user(insn.word, pc);
 
  switch (insn.i_format.opcode) {
- /*
- * These are instructions that a compiler doesn't generate.  We
- * can assume therefore that the code is MIPS-aware and
- * really buggy.  Emulating these instructions would break the
- * semantics anyway.
- */
+ /*
+ * These are instructions that a compiler doesn't generate.  We
+ * can assume therefore that the code is MIPS-aware and
+ * really buggy.  Emulating these instructions would break the
+ * semantics anyway.
+ */
  case ll_op:
  case lld_op:
  case sc_op:
  case scd_op:
 
- /*
- * For these instructions the only way to create an address
- * error is an attempted access to kernel/supervisor address
- * space.
- */
+ /*
+ * For these instructions the only way to create an address
+ * error is an attempted access to kernel/supervisor address
+ * space.
+ */
  case ldl_op:
  case ldr_op:
  case lwl_op:
@@ -149,36 +472,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  case sb_op:
  goto sigbus;
 
- /*
- * The remaining opcodes are the ones that are really of interest.
- */
+ /*
+ * The remaining opcodes are the ones that are really of
+ * interest.
+ */
  case lh_op:
  if (!access_ok(VERIFY_READ, addr, 2))
  goto sigbus;
 
- __asm__ __volatile__ (".set\tnoat\n"
-#ifdef __BIG_ENDIAN
- "1:\tlb\t%0, 0(%2)\n"
- "2:\tlbu\t$1, 1(%2)\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- "1:\tlb\t%0, 1(%2)\n"
- "2:\tlbu\t$1, 0(%2)\n\t"
-#endif
- "sll\t%0, 0x8\n\t"
- "or\t%0, $1\n\t"
- "li\t%1, 0\n"
- "3:\t.set\tat\n\t"
- ".section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%1, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=&r" (value), "=r" (res)
- : "r" (addr), "i" (-EFAULT));
+ LoadHW(addr, value, res);
  if (res)
  goto fault;
  compute_return_epc(regs);
@@ -189,26 +491,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  if (!access_ok(VERIFY_READ, addr, 4))
  goto sigbus;
 
- __asm__ __volatile__ (
-#ifdef __BIG_ENDIAN
- "1:\tlwl\t%0, (%2)\n"
- "2:\tlwr\t%0, 3(%2)\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- "1:\tlwl\t%0, 3(%2)\n"
- "2:\tlwr\t%0, (%2)\n\t"
-#endif
- "li\t%1, 0\n"
- "3:\t.section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%1, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=&r" (value), "=r" (res)
- : "r" (addr), "i" (-EFAULT));
+ LoadW(addr, value, res);
  if (res)
  goto fault;
  compute_return_epc(regs);
@@ -219,30 +502,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  if (!access_ok(VERIFY_READ, addr, 2))
  goto sigbus;
 
- __asm__ __volatile__ (
- ".set\tnoat\n"
-#ifdef __BIG_ENDIAN
- "1:\tlbu\t%0, 0(%2)\n"
- "2:\tlbu\t$1, 1(%2)\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- "1:\tlbu\t%0, 1(%2)\n"
- "2:\tlbu\t$1, 0(%2)\n\t"
-#endif
- "sll\t%0, 0x8\n\t"
- "or\t%0, $1\n\t"
- "li\t%1, 0\n"
- "3:\t.set\tat\n\t"
- ".section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%1, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=&r" (value), "=r" (res)
- : "r" (addr), "i" (-EFAULT));
+ LoadHWU(addr, value, res);
  if (res)
  goto fault;
  compute_return_epc(regs);
@@ -261,28 +521,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  if (!access_ok(VERIFY_READ, addr, 4))
  goto sigbus;
 
- __asm__ __volatile__ (
-#ifdef __BIG_ENDIAN
- "1:\tlwl\t%0, (%2)\n"
- "2:\tlwr\t%0, 3(%2)\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- "1:\tlwl\t%0, 3(%2)\n"
- "2:\tlwr\t%0, (%2)\n\t"
-#endif
- "dsll\t%0, %0, 32\n\t"
- "dsrl\t%0, %0, 32\n\t"
- "li\t%1, 0\n"
- "3:\t.section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%1, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=&r" (value), "=r" (res)
- : "r" (addr), "i" (-EFAULT));
+ LoadWU(addr, value, res);
  if (res)
  goto fault;
  compute_return_epc(regs);
@@ -305,26 +544,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  if (!access_ok(VERIFY_READ, addr, 8))
  goto sigbus;
 
- __asm__ __volatile__ (
-#ifdef __BIG_ENDIAN
- "1:\tldl\t%0, (%2)\n"
- "2:\tldr\t%0, 7(%2)\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- "1:\tldl\t%0, 7(%2)\n"
- "2:\tldr\t%0, (%2)\n\t"
-#endif
- "li\t%1, 0\n"
- "3:\t.section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%1, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=&r" (value), "=r" (res)
- : "r" (addr), "i" (-EFAULT));
+ LoadDW(addr, value, res);
  if (res)
  goto fault;
  compute_return_epc(regs);
@@ -339,68 +559,22 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  if (!access_ok(VERIFY_WRITE, addr, 2))
  goto sigbus;
 
+ compute_return_epc(regs);
  value = regs->regs[insn.i_format.rt];
- __asm__ __volatile__ (
-#ifdef __BIG_ENDIAN
- ".set\tnoat\n"
- "1:\tsb\t%1, 1(%2)\n\t"
- "srl\t$1, %1, 0x8\n"
- "2:\tsb\t$1, 0(%2)\n\t"
- ".set\tat\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- ".set\tnoat\n"
- "1:\tsb\t%1, 0(%2)\n\t"
- "srl\t$1,%1, 0x8\n"
- "2:\tsb\t$1, 1(%2)\n\t"
- ".set\tat\n\t"
-#endif
- "li\t%0, 0\n"
- "3:\n\t"
- ".section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%0, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=r" (res)
- : "r" (value), "r" (addr), "i" (-EFAULT));
+ StoreHW(addr, value, res);
  if (res)
  goto fault;
- compute_return_epc(regs);
  break;
 
  case sw_op:
  if (!access_ok(VERIFY_WRITE, addr, 4))
  goto sigbus;
 
+ compute_return_epc(regs);
  value = regs->regs[insn.i_format.rt];
- __asm__ __volatile__ (
-#ifdef __BIG_ENDIAN
- "1:\tswl\t%1,(%2)\n"
- "2:\tswr\t%1, 3(%2)\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- "1:\tswl\t%1, 3(%2)\n"
- "2:\tswr\t%1, (%2)\n\t"
-#endif
- "li\t%0, 0\n"
- "3:\n\t"
- ".section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%0, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=r" (res)
- : "r" (value), "r" (addr), "i" (-EFAULT));
+ StoreW(addr, value, res);
  if (res)
  goto fault;
- compute_return_epc(regs);
  break;
 
  case sd_op:
@@ -415,31 +589,11 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  if (!access_ok(VERIFY_WRITE, addr, 8))
  goto sigbus;
 
+ compute_return_epc(regs);
  value = regs->regs[insn.i_format.rt];
- __asm__ __volatile__ (
-#ifdef __BIG_ENDIAN
- "1:\tsdl\t%1,(%2)\n"
- "2:\tsdr\t%1, 7(%2)\n\t"
-#endif
-#ifdef __LITTLE_ENDIAN
- "1:\tsdl\t%1, 7(%2)\n"
- "2:\tsdr\t%1, (%2)\n\t"
-#endif
- "li\t%0, 0\n"
- "3:\n\t"
- ".section\t.fixup,\"ax\"\n\t"
- "4:\tli\t%0, %3\n\t"
- "j\t3b\n\t"
- ".previous\n\t"
- ".section\t__ex_table,\"a\"\n\t"
- STR(PTR)"\t1b, 4b\n\t"
- STR(PTR)"\t2b, 4b\n\t"
- ".previous"
- : "=r" (res)
- : "r" (value), "r" (addr), "i" (-EFAULT));
+ StoreDW(addr, value, res);
  if (res)
  goto fault;
- compute_return_epc(regs);
  break;
 #endif /* CONFIG_64BIT */
 
@@ -450,10 +604,21 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  case ldc1_op:
  case swc1_op:
  case sdc1_op:
- /*
- * I herewith declare: this does not happen.  So send SIGBUS.
- */
- goto sigbus;
+ die_if_kernel("Unaligned FP access in kernel code", regs);
+ BUG_ON(!used_math());
+ BUG_ON(!is_fpu_owner());
+
+ lose_fpu(1); /* save the FPU state for the emulator */
+ res = fpu_emulator_cop1Handler(regs, &current->thread.fpu, 1,
+       &fault_addr);
+ own_fpu(1); /* restore FPU state */
+
+ /* If something went wrong, signal */
+ process_fpemu_return(res, fault_addr);
+
+ if (res == 0)
+ break;
+ return;
 
  /*
  * COP2 is available to implementor for application specific use.
@@ -491,6 +656,9 @@ static void emulate_load_store_insn(struct pt_regs *regs,
  return;
 
 fault:
+ /* roll back jump/branch */
+ regs->cp0_epc = origpc;
+ regs->regs[31] = orig31;
  /* Did we have an exception handler installed? */
  if (fixup_exception(regs))
  return;
@@ -507,7 +675,879 @@ sigbus:
  return;
 
 sigill:
- die_if_kernel("Unhandled kernel unaligned access or invalid instruction", regs);
+ die_if_kernel
+    ("Unhandled kernel unaligned access or invalid instruction", regs);
+ force_sig(SIGILL, current);
+}
+
+/* Recode table from microMIPS register notation to GPR. */
+static int mmreg16to32[] = { 16, 17, 2, 3, 4, 5, 6, 7 };
+
+/* Recode table from microMIPS STORE register notation to GPR. */
+static int mmreg16to32_st[] = { 0, 17, 2, 3, 4, 5, 6, 7 };
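The two recode tables above differ only in entry 0: the load table maps field 0 to GPR 16 ($s0), while the store table maps it to $0 so SH16/SW16 can store the constant zero. A self-contained sketch of the lookup:

```c
#include <assert.h>

/* microMIPS 16-bit register recoding, as in the two tables above. */
static const int mmreg16to32[]    = { 16, 17, 2, 3, 4, 5, 6, 7 };
static const int mmreg16to32_st[] = {  0, 17, 2, 3, 4, 5, 6, 7 };

/* Decode a 3-bit rt field for a load instruction. */
static int mm16_load_reg(unsigned field)
{
	return mmreg16to32[field & 7];
}

/* Decode a 3-bit rt field for a store instruction. */
static int mm16_store_reg(unsigned field)
{
	return mmreg16to32_st[field & 7];
}
```

Entries 1..7 are identical in both tables; only the zero slot diverges.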
+
+void emulate_load_store_microMIPS(struct pt_regs *regs, void __user * addr)
+{
+ unsigned long value;
+ unsigned int res;
+ int i;
+ unsigned int reg = 0, rvar;
+ unsigned long orig31;
+ u16 __user *pc16;
+ u16 halfword;
+ unsigned int word;
+ unsigned long origpc, contpc;
+ union mips_instruction insn;
+ struct decoded_instn mminst;
+ void __user *fault_addr = NULL;
+
+ origpc = regs->cp0_epc;
+ orig31 = regs->regs[31];
+
+ mminst.micro_mips_mode = 1;
+
+ /*
+ * This load never faults.
+ */
+ pc16 = (unsigned short __user *)(regs->cp0_epc & ~MIPS_ISA_MODE);
+ __get_user(halfword, pc16);
+ pc16++;
+ contpc = regs->cp0_epc + 2;
+ word = ((unsigned int)halfword << 16);
+ mminst.pc_inc = 2;
+
+ if (!mm_is16bit(halfword)) {
+ __get_user(halfword, pc16);
+ pc16++;
+ contpc = regs->cp0_epc + 4;
+ mminst.pc_inc = 4;
+ word |= halfword;
+ }
+ mminst.insn = word;
+
+ if (get_user(halfword, pc16))
+ goto fault;
+ mminst.next_pc_inc = 2;
+ word = ((unsigned int)halfword << 16);
+
+ if (!mm_is16bit(halfword)) {
+ pc16++;
+ if (get_user(halfword, pc16))
+ goto fault;
+ mminst.next_pc_inc = 4;
+ word |= halfword;
+ }
+ mminst.next_insn = word;
+
+ insn = (union mips_instruction)(mminst.insn);
+ if (mm_isBranchInstr(regs, mminst, &contpc))
+ insn = (union mips_instruction)(mminst.next_insn);
+
+ /* Parse instruction to find what to do. */
+
+ switch (insn.mm_i_format.opcode) {
+
+ case mm_pool32a_op:
+ switch (insn.mm_x_format.func) {
+ case mm_lwxs_op:
+ reg = insn.mm_x_format.rd;
+ goto loadW;
+ }
+
+ goto sigbus;
+
+ case mm_pool32b_op:
+ switch (insn.mm_m_format.func) {
+ case mm_lwp_func:
+ reg = insn.mm_m_format.rd;
+ if (reg == 31)
+ goto sigbus;
+
+ if (!access_ok(VERIFY_READ, addr, 8))
+ goto sigbus;
+
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg] = value;
+ addr += 4;
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg + 1] = value;
+ goto success;
+
+ case mm_swp_func:
+ reg = insn.mm_m_format.rd;
+ if (reg == 31)
+ goto sigbus;
+
+ if (!access_ok(VERIFY_WRITE, addr, 8))
+ goto sigbus;
+
+ value = regs->regs[reg];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ value = regs->regs[reg + 1];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ goto success;
+
+ case mm_ldp_func:
+#ifdef CONFIG_64BIT
+ reg = insn.mm_m_format.rd;
+ if (reg == 31)
+ goto sigbus;
+
+ if (!access_ok(VERIFY_READ, addr, 16))
+ goto sigbus;
+
+ LoadDW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg] = value;
+ addr += 8;
+ LoadDW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg + 1] = value;
+ goto success;
+#endif /* CONFIG_64BIT */
+
+ goto sigill;
+
+ case mm_sdp_func:
+#ifdef CONFIG_64BIT
+ reg = insn.mm_m_format.rd;
+ if (reg == 31)
+ goto sigbus;
+
+ if (!access_ok(VERIFY_WRITE, addr, 16))
+ goto sigbus;
+
+ value = regs->regs[reg];
+ StoreDW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 8;
+ value = regs->regs[reg + 1];
+ StoreDW(addr, value, res);
+ if (res)
+ goto fault;
+ goto success;
+#endif /* CONFIG_64BIT */
+
+ goto sigill;
+
+ case mm_lwm32_func:
+ reg = insn.mm_m_format.rd;
+ rvar = reg & 0xf;
+ if ((rvar > 9) || !reg)
+ goto sigill;
+ if (reg & 0x10) {
+ if (!access_ok
+    (VERIFY_READ, addr, 4 * (rvar + 1)))
+ goto sigbus;
+ } else {
+ if (!access_ok(VERIFY_READ, addr, 4 * rvar))
+ goto sigbus;
+ }
+ if (rvar == 9)
+ rvar = 8;
+ for (i = 16; rvar; rvar--, i++) {
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ regs->regs[i] = value;
+ }
+ if ((reg & 0xf) == 9) {
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ regs->regs[30] = value;
+ }
+ if (reg & 0x10) {
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[31] = value;
+ }
+ goto success;
+
+ case mm_swm32_func:
+ reg = insn.mm_m_format.rd;
+ rvar = reg & 0xf;
+ if ((rvar > 9) || !reg)
+ goto sigill;
+ if (reg & 0x10) {
+ if (!access_ok
+    (VERIFY_WRITE, addr, 4 * (rvar + 1)))
+ goto sigbus;
+ } else {
+ if (!access_ok(VERIFY_WRITE, addr, 4 * rvar))
+ goto sigbus;
+ }
+ if (rvar == 9)
+ rvar = 8;
+ for (i = 16; rvar; rvar--, i++) {
+ value = regs->regs[i];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ }
+ if ((reg & 0xf) == 9) {
+ value = regs->regs[30];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ }
+ if (reg & 0x10) {
+ value = regs->regs[31];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ }
+ goto success;
+
+ case mm_ldm_func:
+#ifdef CONFIG_64BIT
+ reg = insn.mm_m_format.rd;
+ rvar = reg & 0xf;
+ if ((rvar > 9) || !reg)
+ goto sigill;
+ if (reg & 0x10) {
+ if (!access_ok
+    (VERIFY_READ, addr, 8 * (rvar + 1)))
+ goto sigbus;
+ } else {
+ if (!access_ok(VERIFY_READ, addr, 8 * rvar))
+ goto sigbus;
+ }
+ if (rvar == 9)
+ rvar = 8;
+
+ for (i = 16; rvar; rvar--, i++) {
+ LoadDW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ regs->regs[i] = value;
+ }
+ if ((reg & 0xf) == 9) {
+ LoadDW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 8;
+ regs->regs[30] = value;
+ }
+ if (reg & 0x10) {
+ LoadDW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[31] = value;
+ }
+ goto success;
+#endif /* CONFIG_64BIT */
+
+ goto sigill;
+
+ case mm_sdm_func:
+#ifdef CONFIG_64BIT
+ reg = insn.mm_m_format.rd;
+ rvar = reg & 0xf;
+ if ((rvar > 9) || !reg)
+ goto sigill;
+ if (reg & 0x10) {
+ if (!access_ok
+    (VERIFY_WRITE, addr, 8 * (rvar + 1)))
+ goto sigbus;
+ } else {
+ if (!access_ok(VERIFY_WRITE, addr, 8 * rvar))
+ goto sigbus;
+ }
+ if (rvar == 9)
+ rvar = 8;
+
+ for (i = 16; rvar; rvar--, i++) {
+ value = regs->regs[i];
+ StoreDW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 8;
+ }
+ if ((reg & 0xf) == 9) {
+ value = regs->regs[30];
+ StoreDW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 8;
+ }
+ if (reg & 0x10) {
+ value = regs->regs[31];
+ StoreDW(addr, value, res);
+ if (res)
+ goto fault;
+ }
+ goto success;
+#endif /* CONFIG_64BIT */
+
+ goto sigill;
+
+ /*  LWC2, SWC2, LDC2, SDC2 are not serviced */
+ }
+
+ goto sigbus;
+
+ case mm_pool32c_op:
+ switch (insn.mm_m_format.func) {
+ case mm_lwu_func:
+ reg = insn.mm_m_format.rd;
+ goto loadWU;
+ }
+
+ /*  LL,SC,LLD,SCD are not serviced */
+ goto sigbus;
+
+ case mm_pool32f_op:
+ switch (insn.mm_x_format.func) {
+ case mm_lwxc1_func:
+ case mm_swxc1_func:
+ case mm_ldxc1_func:
+ case mm_sdxc1_func:
+ goto fpu_emul;
+ }
+
+ goto sigbus;
+
+ case mm_ldc132_op:
+ case mm_sdc132_op:
+ case mm_lwc132_op:
+ case mm_swc132_op:
+fpu_emul:
+ /* roll back jump/branch */
+ regs->cp0_epc = origpc;
+ regs->regs[31] = orig31;
+
+ die_if_kernel("Unaligned FP access in kernel code", regs);
+ BUG_ON(!used_math());
+ BUG_ON(!is_fpu_owner());
+
+ lose_fpu(1); /* save the FPU state for the emulator */
+ res = fpu_emulator_cop1Handler(regs, &current->thread.fpu, 1,
+       &fault_addr);
+ own_fpu(1); /* restore FPU state */
+
+ /* If something went wrong, signal */
+ process_fpemu_return(res, fault_addr);
+
+ if (res == 0)
+ goto success;
+ return;
+
+ case mm_lh32_op:
+ reg = insn.mm_i_format.rt;
+ goto loadHW;
+
+ case mm_lhu32_op:
+ reg = insn.mm_i_format.rt;
+ goto loadHWU;
+
+ case mm_lw32_op:
+ reg = insn.mm_i_format.rt;
+ goto loadW;
+
+ case mm_sh32_op:
+ reg = insn.mm_i_format.rt;
+ goto storeHW;
+
+ case mm_sw32_op:
+ reg = insn.mm_i_format.rt;
+ goto storeW;
+
+ case mm_ld32_op:
+ reg = insn.mm_i_format.rt;
+ goto loadDW;
+
+ case mm_sd32_op:
+ reg = insn.mm_i_format.rt;
+ goto storeDW;
+
+ case mm_pool16c_op:
+ switch (insn.mm16_m_format.func) {
+ case mm_lwm16_op:
+ reg = insn.mm16_m_format.rlist;
+ rvar = reg + 1;
+ if (!access_ok(VERIFY_READ, addr, 4 * rvar))
+ goto sigbus;
+
+ for (i = 16; rvar; rvar--, i++) {
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ regs->regs[i] = value;
+ }
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[31] = value;
+
+ goto success;
+
+ case mm_swm16_op:
+ reg = insn.mm16_m_format.rlist;
+ rvar = reg + 1;
+ if (!access_ok(VERIFY_WRITE, addr, 4 * rvar))
+ goto sigbus;
+
+ for (i = 16; rvar; rvar--, i++) {
+ value = regs->regs[i];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ addr += 4;
+ }
+ value = regs->regs[31];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+
+ goto success;
+
+ }
+
+ goto sigbus;
+
+ case mm_lhu16_op:
+ reg = mmreg16to32[insn.mm16_rb_format.rt];
+ goto loadHWU;
+
+ case mm_lw16_op:
+ reg = mmreg16to32[insn.mm16_rb_format.rt];
+ goto loadW;
+
+ case mm_sh16_op:
+ reg = mmreg16to32_st[insn.mm16_rb_format.rt];
+ goto storeHW;
+
+ case mm_sw16_op:
+ reg = mmreg16to32_st[insn.mm16_rb_format.rt];
+ goto storeW;
+
+ case mm_lwsp16_op:
+ reg = insn.mm16_r5_format.rt;
+ goto loadW;
+
+ case mm_swsp16_op:
+ reg = insn.mm16_r5_format.rt;
+ goto storeW;
+
+ case mm_lwgp16_op:
+ reg = mmreg16to32[insn.mm16_r3_format.rt];
+ goto loadW;
+
+ default:
+ goto sigill;
+ }
+
+loadHW:
+ if (!access_ok(VERIFY_READ, addr, 2))
+ goto sigbus;
+
+ LoadHW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg] = value;
+ goto success;
+
+loadHWU:
+ if (!access_ok(VERIFY_READ, addr, 2))
+ goto sigbus;
+
+ LoadHWU(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg] = value;
+ goto success;
+
+loadW:
+ if (!access_ok(VERIFY_READ, addr, 4))
+ goto sigbus;
+
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg] = value;
+ goto success;
+
+loadWU:
+#ifdef CONFIG_64BIT
+ /*
+ * A 32-bit kernel might be running on a 64-bit processor.  But
+ * if we're on a 32-bit processor and an i-cache incoherency
+ * or race makes us see a 64-bit instruction here the sdl/sdr
+ * would blow up, so for now we don't handle unaligned 64-bit
+ * instructions on 32-bit kernels.
+ */
+ if (!access_ok(VERIFY_READ, addr, 4))
+ goto sigbus;
+
+ LoadWU(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg] = value;
+ goto success;
+#endif /* CONFIG_64BIT */
+
+ /* Cannot handle 64-bit instructions in 32-bit kernel */
+ goto sigill;
+
+loadDW:
+#ifdef CONFIG_64BIT
+ /*
+ * A 32-bit kernel might be running on a 64-bit processor.  But
+ * if we're on a 32-bit processor and an i-cache incoherency
+ * or race makes us see a 64-bit instruction here the sdl/sdr
+ * would blow up, so for now we don't handle unaligned 64-bit
+ * instructions on 32-bit kernels.
+ */
+ if (!access_ok(VERIFY_READ, addr, 8))
+ goto sigbus;
+
+ LoadDW(addr, value, res);
+ if (res)
+ goto fault;
+ regs->regs[reg] = value;
+ goto success;
+#endif /* CONFIG_64BIT */
+
+ /* Cannot handle 64-bit instructions in 32-bit kernel */
+ goto sigill;
+
+storeHW:
+ if (!access_ok(VERIFY_WRITE, addr, 2))
+ goto sigbus;
+
+ value = regs->regs[reg];
+ StoreHW(addr, value, res);
+ if (res)
+ goto fault;
+ goto success;
+
+storeW:
+ if (!access_ok(VERIFY_WRITE, addr, 4))
+ goto sigbus;
+
+ value = regs->regs[reg];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ goto success;
+
+storeDW:
+#ifdef CONFIG_64BIT
+ /*
+ * A 32-bit kernel might be running on a 64-bit processor.  But
+ * if we're on a 32-bit processor and an i-cache incoherency
+ * or race makes us see a 64-bit instruction here the sdl/sdr
+ * would blow up, so for now we don't handle unaligned 64-bit
+ * instructions on 32-bit kernels.
+ */
+ if (!access_ok(VERIFY_WRITE, addr, 8))
+ goto sigbus;
+
+ value = regs->regs[reg];
+ StoreDW(addr, value, res);
+ if (res)
+ goto fault;
+ goto success;
+#endif /* CONFIG_64BIT */
+
+ /* Cannot handle 64-bit instructions in 32-bit kernel */
+ goto sigill;
+
+success:
+ regs->cp0_epc = contpc; /* advance or branch */
+
+#ifdef CONFIG_DEBUG_FS
+ unaligned_instructions++;
+#endif
+ return;
+
+fault:
+ /* roll back jump/branch */
+ regs->cp0_epc = origpc;
+ regs->regs[31] = orig31;
+ /* Did we have an exception handler installed? */
+ if (fixup_exception(regs))
+ return;
+
+ die_if_kernel("Unhandled kernel unaligned access", regs);
+ force_sig(SIGSEGV, current);
+
+ return;
+
+sigbus:
+ die_if_kernel("Unhandled kernel unaligned access", regs);
+ force_sig(SIGBUS, current);
+
+ return;
+
+sigill:
+ die_if_kernel
+    ("Unhandled kernel unaligned access or invalid instruction", regs);
+ force_sig(SIGILL, current);
+}
+
+static void emulate_load_store_MIPS16e(struct pt_regs *regs, void __user * addr)
+{
+ unsigned long value;
+ unsigned int res;
+ int reg;
+ unsigned long orig31;
+ u16 __user *pc16;
+ unsigned long origpc;
+ union mips16e_instruction mips16inst, oldinst;
+
+ origpc = regs->cp0_epc;
+ orig31 = regs->regs[31];
+ pc16 = (unsigned short __user *)(origpc & ~MIPS_ISA_MODE);
+ /*
+ * This load never faults.
+ */
+ __get_user(mips16inst.full, pc16);
+ oldinst = mips16inst;
+
+ /* skip EXTEND instruction */
+ if (mips16inst.ri.opcode == MIPS16e_extend_op) {
+ pc16++;
+ __get_user(mips16inst.full, pc16);
+ } else if (delay_slot(regs)) {
+ /*  skip jump instructions */
+ /*  JAL/JALX are 32 bits but have OPCODE in first short int */
+ if (mips16inst.ri.opcode == MIPS16e_jal_op)
+ pc16++;
+ pc16++;
+ if (get_user(mips16inst.full, pc16))
+ goto sigbus;
+ }
+
+ switch (mips16inst.ri.opcode) {
+ case MIPS16e_i64_op: /* I64 or RI64 instruction */
+ switch (mips16inst.i64.func) { /* I64/RI64 func field check */
+ case MIPS16e_ldpc_func:
+ case MIPS16e_ldsp_func:
+ reg = mips16e_reg2gpr[mips16inst.ri64.ry];
+ goto loadDW;
+
+ case MIPS16e_sdsp_func:
+ reg = mips16e_reg2gpr[mips16inst.ri64.ry];
+ goto writeDW;
+
+ case MIPS16e_sdrasp_func:
+ reg = 29; /* GPRSP */
+ goto writeDW;
+ }
+
+ goto sigbus;
+
+ case MIPS16e_swsp_op:
+ case MIPS16e_lwpc_op:
+ case MIPS16e_lwsp_op:
+ reg = mips16e_reg2gpr[mips16inst.ri.rx];
+ break;
+
+ case MIPS16e_i8_op:
+ if (mips16inst.i8.func != MIPS16e_swrasp_func)
+ goto sigbus;
+ reg = 29; /* GPRSP */
+ break;
+
+ default:
+ reg = mips16e_reg2gpr[mips16inst.rri.ry];
+ break;
+ }
+
+ switch (mips16inst.ri.opcode) {
+
+ case MIPS16e_lb_op:
+ case MIPS16e_lbu_op:
+ case MIPS16e_sb_op:
+ goto sigbus;
+
+ case MIPS16e_lh_op:
+ if (!access_ok(VERIFY_READ, addr, 2))
+ goto sigbus;
+
+ LoadHW(addr, value, res);
+ if (res)
+ goto fault;
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ regs->regs[reg] = value;
+ break;
+
+ case MIPS16e_lhu_op:
+ if (!access_ok(VERIFY_READ, addr, 2))
+ goto sigbus;
+
+ LoadHWU(addr, value, res);
+ if (res)
+ goto fault;
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ regs->regs[reg] = value;
+ break;
+
+ case MIPS16e_lw_op:
+ case MIPS16e_lwpc_op:
+ case MIPS16e_lwsp_op:
+ if (!access_ok(VERIFY_READ, addr, 4))
+ goto sigbus;
+
+ LoadW(addr, value, res);
+ if (res)
+ goto fault;
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ regs->regs[reg] = value;
+ break;
+
+ case MIPS16e_lwu_op:
+#ifdef CONFIG_64BIT
+ /*
+ * A 32-bit kernel might be running on a 64-bit processor.  But
+ * if we're on a 32-bit processor and an i-cache incoherency
+ * or race makes us see a 64-bit instruction here the sdl/sdr
+ * would blow up, so for now we don't handle unaligned 64-bit
+ * instructions on 32-bit kernels.
+ */
+ if (!access_ok(VERIFY_READ, addr, 4))
+ goto sigbus;
+
+ LoadWU(addr, value, res);
+ if (res)
+ goto fault;
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ regs->regs[reg] = value;
+ break;
+#endif /* CONFIG_64BIT */
+
+ /* Cannot handle 64-bit instructions in 32-bit kernel */
+ goto sigill;
+
+ case MIPS16e_ld_op:
+loadDW:
+#ifdef CONFIG_64BIT
+ /*
+ * A 32-bit kernel might be running on a 64-bit processor.  But
+ * if we're on a 32-bit processor and an i-cache incoherency
+ * or race makes us see a 64-bit instruction here the sdl/sdr
+ * would blow up, so for now we don't handle unaligned 64-bit
+ * instructions on 32-bit kernels.
+ */
+ if (!access_ok(VERIFY_READ, addr, 8))
+ goto sigbus;
+
+ LoadDW(addr, value, res);
+ if (res)
+ goto fault;
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ regs->regs[reg] = value;
+ break;
+#endif /* CONFIG_64BIT */
+
+ /* Cannot handle 64-bit instructions in 32-bit kernel */
+ goto sigill;
+
+ case MIPS16e_sh_op:
+ if (!access_ok(VERIFY_WRITE, addr, 2))
+ goto sigbus;
+
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ value = regs->regs[reg];
+ StoreHW(addr, value, res);
+ if (res)
+ goto fault;
+ break;
+
+ case MIPS16e_sw_op:
+ case MIPS16e_swsp_op:
+ case MIPS16e_i8_op: /* actually - MIPS16e_swrasp_func */
+ if (!access_ok(VERIFY_WRITE, addr, 4))
+ goto sigbus;
+
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ value = regs->regs[reg];
+ StoreW(addr, value, res);
+ if (res)
+ goto fault;
+ break;
+
+ case MIPS16e_sd_op:
+writeDW:
+#ifdef CONFIG_64BIT
+ /*
+ * A 32-bit kernel might be running on a 64-bit processor.  But
+ * if we're on a 32-bit processor and an i-cache incoherency
+ * or race makes us see a 64-bit instruction here the sdl/sdr
+ * would blow up, so for now we don't handle unaligned 64-bit
+ * instructions on 32-bit kernels.
+ */
+ if (!access_ok(VERIFY_WRITE, addr, 8))
+ goto sigbus;
+
+ MIPS16e_compute_return_epc(regs, &oldinst);
+ value = regs->regs[reg];
+ StoreDW(addr, value, res);
+ if (res)
+ goto fault;
+ break;
+#endif /* CONFIG_64BIT */
+
+ /* Cannot handle 64-bit instructions in 32-bit kernel */
+ goto sigill;
+
+ default:
+ /*
+ * Pheeee...  We encountered a yet unknown instruction or
+ * cache coherence problem.  Die sucker, die ...
+ */
+ goto sigill;
+ }
+
+#ifdef CONFIG_DEBUG_FS
+ unaligned_instructions++;
+#endif
+
+ return;
+
+fault:
+ /* roll back jump/branch */
+ regs->cp0_epc = origpc;
+ regs->regs[31] = orig31;
+ /* Did we have an exception handler installed? */
+ if (fixup_exception(regs))
+ return;
+
+ die_if_kernel("Unhandled kernel unaligned access", regs);
+ force_sig(SIGSEGV, current);
+
+ return;
+
+sigbus:
+ die_if_kernel("Unhandled kernel unaligned access", regs);
+ force_sig(SIGBUS, current);
+
+ return;
+
+sigill:
+ die_if_kernel
+    ("Unhandled kernel unaligned access or invalid instruction", regs);
  force_sig(SIGILL, current);
 }
 
@@ -520,23 +1560,64 @@ asmlinkage void do_ade(struct pt_regs *regs)
  1, regs, regs->cp0_badvaddr);
  /*
  * Did we catch a fault trying to load an instruction?
- * Or are we running in MIPS16 mode?
  */
- if ((regs->cp0_badvaddr == regs->cp0_epc) || (regs->cp0_epc & 0x1))
+ if (regs->cp0_badvaddr == regs->cp0_epc)
  goto sigbus;
 
- pc = (unsigned int __user *) exception_epc(regs);
  if (user_mode(regs) && !test_thread_flag(TIF_FIXADE))
  goto sigbus;
  if (unaligned_action == UNALIGNED_ACTION_SIGNAL)
  goto sigbus;
- else if (unaligned_action == UNALIGNED_ACTION_SHOW)
- show_registers(regs);
 
  /*
  * Do branch emulation only if we didn't forward the exception.
  * This is all so but ugly ...
  */
+
+ /*
+ * Are we running in MIPS16e/microMIPS mode?
+ */
+ if (is16mode(regs)) {
+ /*
+ * Did we catch a fault trying to load an instruction in
+ * 16bit mode?
+ */
+ if (regs->cp0_badvaddr == (regs->cp0_epc & ~MIPS_ISA_MODE))
+ goto sigbus;
+ if (unaligned_action == UNALIGNED_ACTION_SHOW)
+ show_registers(regs);
+
+ if (cpu_has_mips16) {
+ seg = get_fs();
+ if (!user_mode(regs))
+ set_fs(KERNEL_DS);
+ emulate_load_store_MIPS16e(regs,
+   (void __user *)regs->
+   cp0_badvaddr);
+ set_fs(seg);
+
+ return;
+ }
+
+ if (cpu_has_mmips) { /* micromips unaligned access */
+ seg = get_fs();
+ if (!user_mode(regs))
+ set_fs(KERNEL_DS);
+ emulate_load_store_microMIPS(regs,
+     (void __user *)regs->
+     cp0_badvaddr);
+ set_fs(seg);
+
+ return;
+ }
+
+ goto sigbus;
+ }
+
+ if (unaligned_action == UNALIGNED_ACTION_SHOW)
+ show_registers(regs);
+ pc = (unsigned int __user *)exception_epc(regs);
+
  seg = get_fs();
  if (!user_mode(regs))
  set_fs(KERNEL_DS);
--
1.7.9.5

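The SDM/LWM register-list encoding that the hunks above emulate can be sketched in C. This is an illustrative model, not kernel code (the helper name is made up): the low four bits of the rd field count s-registers, the special value 9 additionally selects $30 (fp), and bit 4 adds $31 (ra), mirroring the `rvar`/`reg & 0x10` checks in mm_sdm_func.

```c
/* Illustrative decode of the 32-bit microMIPS LWM/SWM register list,
 * modelled on mm_sdm_func above. Returns the number of registers
 * written to list[], or -1 for a reserved encoding. */
static int decode_swm_list(unsigned int rd, int list[10])
{
    int n = 0, i;
    unsigned int rvar = rd & 0xf;

    if (rvar > 9 || rd == 0)         /* same sigill conditions */
        return -1;
    for (i = 0; i < (int)(rvar == 9 ? 8 : rvar); i++)
        list[n++] = 16 + i;          /* s0..s7 */
    if (rvar == 9)
        list[n++] = 30;              /* fp is implied by count 9 */
    if (rd & 0x10)
        list[n++] = 31;              /* ra */
    return n;
}
```

Note that the emulation code walks this same list twice: once to size the `access_ok()` check and once in the load/store loop.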


[PATCH v99,07/13] MIPS: microMIPS: Add vdso support.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: Douglas Leung <[hidden email]>

Support vdso in microMIPS mode.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/kernel/signal.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c
index b6aa770..3dc23cd 100644
--- a/arch/mips/kernel/signal.c
+++ b/arch/mips/kernel/signal.c
@@ -35,6 +35,7 @@
 #include <asm/war.h>
 #include <asm/vdso.h>
 #include <asm/dsp.h>
+#include <asm/inst.h>
 
 #include "signal-common.h"
 
@@ -518,7 +519,12 @@ static void handle_signal(unsigned long sig, siginfo_t *info,
  sigset_t *oldset = sigmask_to_save();
  int ret;
  struct mips_abi *abi = current->thread.abi;
+#ifdef CONFIG_CPU_MICROMIPS
+ void *vdso = (void *)
+ ((unsigned int)current->mm->context.vdso | MIPS_ISA_MODE);
+#else
  void *vdso = current->mm->context.vdso;
+#endif
 
  if (regs->regs[0]) {
  switch(regs->regs[2]) {
--
1.7.9.5

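The one-line change in this patch relies on the MIPS convention that bit 0 of a branch target selects the ISA mode, so signal trampolines in the vdso are entered as microMIPS code. A minimal sketch of that address tagging, assuming MIPS_ISA_MODE is 1 as in the MIPS16e handling earlier in the series:

```c
#include <stdint.h>

#define MIPS_ISA_MODE 1UL   /* assumed value: bit 0 = compressed ISA */

/* Tag a code address so an indirect jump enters microMIPS mode. */
static uintptr_t isa_mode_ptr(uintptr_t addr)
{
    return addr | MIPS_ISA_MODE;
}

/* Recover the real (even) instruction address, e.g. for fetching. */
static uintptr_t isa_mode_strip(uintptr_t addr)
{
    return addr & ~MIPS_ISA_MODE;
}
```

This is the same reason the unaligned handler earlier masks cp0_epc with `~MIPS_ISA_MODE` before reading the faulting instruction.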


[PATCH v99,08/13] MIPS: microMIPS: Add configuration option for microMIPS kernel.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

This adds the option to build the Linux kernel using only the
microMIPS ISA. The resulting kernel binary is, at a minimum,
20% smaller than the same kernel built for the MIPS32R2 ISA.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/Kconfig                      |   11 +++
 arch/mips/Makefile                     |    1 +
 arch/mips/configs/sead3micro_defconfig |  125 ++++++++++++++++++++++++++++++++
 arch/mips/kernel/proc.c                |    4 +
 4 files changed, 141 insertions(+)
 create mode 100644 arch/mips/configs/sead3micro_defconfig

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 99c3ad7..07f2dff 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -350,6 +350,7 @@ config MIPS_SEAD3
  select SYS_SUPPORTS_BIG_ENDIAN
  select SYS_SUPPORTS_LITTLE_ENDIAN
  select SYS_SUPPORTS_SMARTMIPS
+ select SYS_SUPPORTS_MICROMIPS
  select USB_ARCH_HAS_EHCI
  select USB_EHCI_BIG_ENDIAN_DESC
  select USB_EHCI_BIG_ENDIAN_MMIO
@@ -2087,6 +2088,13 @@ config CPU_HAS_SMARTMIPS
   you don't know you probably don't have SmartMIPS and should say N
   here.
 
+config CPU_MICROMIPS
+ depends on SYS_SUPPORTS_MICROMIPS
+ bool "Build kernel using microMIPS ISA"
+ help
+  When this option is enabled the kernel will be built using the
+  microMIPS ISA
+
 config CPU_HAS_WB
  bool
 
@@ -2149,6 +2157,9 @@ config SYS_SUPPORTS_HIGHMEM
 config SYS_SUPPORTS_SMARTMIPS
  bool
 
+config SYS_SUPPORTS_MICROMIPS
+ bool
+
 config ARCH_FLATMEM_ENABLE
  def_bool y
  depends on !NUMA && !CPU_LOONGSON2
diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index 654b1ad..6f829de6 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -114,6 +114,7 @@ cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*e
 cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*el-.*' || echo -EL $(undef-all) $(predef-le))
 
 cflags-$(CONFIG_CPU_HAS_SMARTMIPS) += $(call cc-option,-msmartmips)
+cflags-$(CONFIG_CPU_MICROMIPS) += $(call cc-option,-mmicromips -mno-jals)
 
 cflags-$(CONFIG_SB1XXX_CORELIS) += $(call cc-option,-mno-sched-prolog) \
    -fno-omit-frame-pointer
diff --git a/arch/mips/configs/sead3micro_defconfig b/arch/mips/configs/sead3micro_defconfig
new file mode 100644
index 0000000..403332f
--- /dev/null
+++ b/arch/mips/configs/sead3micro_defconfig
@@ -0,0 +1,125 @@
+CONFIG_MIPS_SEAD3=y
+CONFIG_CPU_LITTLE_ENDIAN=y
+CONFIG_CPU_MIPS32_R2=y
+CONFIG_CPU_MICROMIPS=y
+CONFIG_HZ_100=y
+CONFIG_EXPERIMENTAL=y
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_BUF_SHIFT=15
+CONFIG_EMBEDDED=y
+CONFIG_SLAB=y
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=y
+CONFIG_MODULES=y
+# CONFIG_BLK_DEV_BSG is not set
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_MTD=y
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_CFI=y
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_PHYSMAP=y
+CONFIG_MTD_UBI=y
+CONFIG_MTD_UBI_GLUEBI=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_CRYPTOLOOP=m
+CONFIG_SCSI=y
+# CONFIG_SCSI_PROC_FS is not set
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_SG=y
+# CONFIG_SCSI_LOWLEVEL is not set
+CONFIG_NETDEVICES=y
+CONFIG_SMSC911X=y
+# CONFIG_NET_VENDOR_WIZNET is not set
+CONFIG_MARVELL_PHY=y
+CONFIG_DAVICOM_PHY=y
+CONFIG_QSEMI_PHY=y
+CONFIG_LXT_PHY=y
+CONFIG_CICADA_PHY=y
+CONFIG_VITESSE_PHY=y
+CONFIG_SMSC_PHY=y
+CONFIG_BROADCOM_PHY=y
+CONFIG_ICPLUS_PHY=y
+# CONFIG_WLAN is not set
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_SERIO is not set
+# CONFIG_CONSOLE_TRANSLATIONS is not set
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_LEGACY_PTY_COUNT=32
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=2
+CONFIG_SERIAL_8250_RUNTIME_UARTS=2
+# CONFIG_HW_RANDOM is not set
+CONFIG_I2C=y
+# CONFIG_I2C_COMPAT is not set
+CONFIG_I2C_CHARDEV=y
+# CONFIG_I2C_HELPER_AUTO is not set
+CONFIG_SPI=y
+CONFIG_SENSORS_ADT7475=y
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_LCD_CLASS_DEVICE=y
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_USB=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_ROOT_HUB_TT=y
+CONFIG_USB_STORAGE=y
+CONFIG_MMC=y
+CONFIG_MMC_DEBUG=y
+CONFIG_MMC_SPI=y
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_HEARTBEAT=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_M41T80=y
+CONFIG_EXT3_FS=y
+# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_XFS_FS=y
+CONFIG_XFS_QUOTA=y
+CONFIG_XFS_POSIX_ACL=y
+CONFIG_QUOTA=y
+# CONFIG_PRINT_QUOTA_WARNING is not set
+CONFIG_MSDOS_FS=m
+CONFIG_VFAT_FS=m
+CONFIG_TMPFS=y
+CONFIG_JFFS2_FS=y
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_UTF8=y
+# CONFIG_FTRACE is not set
+CONFIG_CRYPTO=y
+CONFIG_CRYPTO_CBC=y
+CONFIG_CRYPTO_ECB=y
+CONFIG_CRYPTO_AES=y
+CONFIG_CRYPTO_ARC4=y
+# CONFIG_CRYPTO_ANSI_CPRNG is not set
+# CONFIG_CRYPTO_HW is not set
diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
index 239ae03..54ac39a 100644
--- a/arch/mips/kernel/proc.c
+++ b/arch/mips/kernel/proc.c
@@ -76,6 +76,10 @@ static int show_cpuinfo(struct seq_file *m, void *v)
  if (cpu_has_mmips) seq_printf(m, "%s", " micromips");
  seq_printf(m, "\n");
 
+ if (cpu_has_mmips) {
+ seq_printf(m, "micromips kernel\t: %s\n",
+      (read_c0_config3() & MIPS_CONF3_ISA_OE) ?  "yes" : "no");
+ }
  seq_printf(m, "shadow register sets\t: %d\n",
       cpu_data[n].srsets);
  seq_printf(m, "kscratch registers\t: %d\n",
--
1.7.9.5

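The Makefile hunk wraps the new flags in Kbuild's cc-option so that `-mmicromips -mno-jals` is only passed to toolchains that accept it. A rough shell sketch of that probe follows; `CC` and the probe details are assumptions here, and the real helper (with -Werror handling and temporary files) lives in the Kbuild makefiles:

```shell
# Sketch of a cc-option-style probe: emit the candidate flags only if
# the compiler accepts them; otherwise emit nothing. Stderr (including
# "command not found") is suppressed so a missing compiler just means
# the flags are dropped.
cc_option() {
    if ${CC:-cc} -Werror "$@" -c -x c /dev/null -o /dev/null 2>/dev/null; then
        printf '%s\n' "$*"
    fi
}

micromips_flags=$(cc_option -mmicromips -mno-jals)
```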


[PATCH v99,09/13] MIPS: microMIPS: Work-around for assembler bug.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

When building a kernel containing only microMIPS instructions, the
linker complains about ISA mode switches in the .fixup section. We
explicitly add the '.insn' assembler directive after the fixed-up
labels to mark them as code and address this.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/include/asm/uaccess.h |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/mips/include/asm/uaccess.h b/arch/mips/include/asm/uaccess.h
index 3b92efe..d2f99ba 100644
--- a/arch/mips/include/asm/uaccess.h
+++ b/arch/mips/include/asm/uaccess.h
@@ -261,6 +261,7 @@ do { \
  __asm__ __volatile__( \
  "1: " insn " %1, %3 \n" \
  "2: \n" \
+ " .insn \n" \
  " .section .fixup,\"ax\" \n" \
  "3: li %0, %4 \n" \
  " j 2b \n" \
@@ -287,7 +288,9 @@ do { \
  __asm__ __volatile__( \
  "1: lw %1, (%3) \n" \
  "2: lw %D1, 4(%3) \n" \
- "3: .section .fixup,\"ax\" \n" \
+ "3: \n" \
+ " .insn \n" \
+ " .section .fixup,\"ax\" \n" \
  "4: li %0, %4 \n" \
  " move %1, $0 \n" \
  " move %D1, $0 \n" \
@@ -355,6 +358,7 @@ do { \
  __asm__ __volatile__( \
  "1: " insn " %z2, %3 # __put_user_asm\n" \
  "2: \n" \
+ " .insn \n" \
  " .section .fixup,\"ax\" \n" \
  "3: li %0, %4 \n" \
  " j 2b \n" \
@@ -373,6 +377,7 @@ do { \
  "1: sw %2, (%3) # __put_user_asm_ll32 \n" \
  "2: sw %D2, 4(%3) \n" \
  "3: \n" \
+ " .insn \n" \
  " .section .fixup,\"ax\" \n" \
  "4: li %0, %4 \n" \
  " j 3b \n" \
@@ -524,6 +529,7 @@ do { \
  __asm__ __volatile__( \
  "1: " insn " %1, %3 \n" \
  "2: \n" \
+ " .insn \n" \
  " .section .fixup,\"ax\" \n" \
  "3: li %0, %4 \n" \
  " j 2b \n" \
@@ -549,7 +555,9 @@ do { \
  "1: ulw %1, (%3) \n" \
  "2: ulw %D1, 4(%3) \n" \
  " move %0, $0 \n" \
- "3: .section .fixup,\"ax\" \n" \
+ "3: \n" \
+ " .insn \n" \
+ " .section .fixup,\"ax\" \n" \
  "4: li %0, %4 \n" \
  " move %1, $0 \n" \
  " move %D1, $0 \n" \
@@ -616,6 +624,7 @@ do { \
  __asm__ __volatile__( \
  "1: " insn " %z2, %3 # __put_user_unaligned_asm\n" \
  "2: \n" \
+ " .insn \n" \
  " .section .fixup,\"ax\" \n" \
  "3: li %0, %4 \n" \
  " j 2b \n" \
@@ -634,6 +643,7 @@ do { \
  "1: sw %2, (%3) # __put_user_unaligned_asm_ll32 \n" \
  "2: sw %D2, 4(%3) \n" \
  "3: \n" \
+ " .insn \n" \
  " .section .fixup,\"ax\" \n" \
  "4: li %0, %4 \n" \
  " j 3b \n" \
--
1.7.9.5



[PATCH v99,10/13] MIPS: microMIPS: Optimise 'memset' core library function.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Optimise 'memset' to use microMIPS instructions and optimisations
that reduce binary size. When the microMIPS ISA is not being used,
the library function assembles to the original binary code.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/include/asm/asm.h |    2 ++
 arch/mips/lib/memset.S      |   84 +++++++++++++++++++++++++++----------------
 2 files changed, 56 insertions(+), 30 deletions(-)

diff --git a/arch/mips/include/asm/asm.h b/arch/mips/include/asm/asm.h
index 608cfcf..604788f 100644
--- a/arch/mips/include/asm/asm.h
+++ b/arch/mips/include/asm/asm.h
@@ -296,6 +296,7 @@ symbol = value
 #define LONG_SUBU subu
 #define LONG_L lw
 #define LONG_S sw
+#define LONG_SP swp
 #define LONG_SLL sll
 #define LONG_SLLV sllv
 #define LONG_SRL srl
@@ -318,6 +319,7 @@ symbol = value
 #define LONG_SUBU dsubu
 #define LONG_L ld
 #define LONG_S sd
+#define LONG_SP sdp
 #define LONG_SLL dsll
 #define LONG_SLLV dsllv
 #define LONG_SRL dsrl
diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
index 606c8a9..cf63df8 100644
--- a/arch/mips/lib/memset.S
+++ b/arch/mips/lib/memset.S
@@ -5,7 +5,8 @@
  *
  * Copyright (C) 1998, 1999, 2000 by Ralf Baechle
  * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
- * Copyright (C) 2007  Maciej W. Rozycki
+ * Copyright (C) 2007 by Maciej W. Rozycki
+ * Copyright (C) 2011, 2012 MIPS Technologies, Inc.
  */
 #include <asm/asm.h>
 #include <asm/asm-offsets.h>
@@ -19,6 +20,20 @@
 #define LONG_S_R sdr
 #endif
 
+#ifdef CONFIG_CPU_MICROMIPS
+#define STORSIZE (LONGSIZE * 2)
+#define STORMASK (STORSIZE - 1)
+#define FILL64RG t8
+#define FILLPTRG t7
+#undef  LONG_S
+#define LONG_S LONG_SP
+#else
+#define STORSIZE LONGSIZE
+#define STORMASK LONGMASK
+#define FILL64RG a1
+#define FILLPTRG t0
+#endif
+
 #define EX(insn,reg,addr,handler) \
 9: insn reg, addr; \
  .section __ex_table,"a"; \
@@ -26,23 +41,25 @@
  .previous
 
  .macro f_fill64 dst, offset, val, fixup
- EX(LONG_S, \val, (\offset +  0 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  1 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  2 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  3 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  4 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  5 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  6 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  7 * LONGSIZE)(\dst), \fixup)
-#if LONGSIZE == 4
- EX(LONG_S, \val, (\offset +  8 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset +  9 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset + 10 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset + 11 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset + 12 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset + 13 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset + 14 * LONGSIZE)(\dst), \fixup)
- EX(LONG_S, \val, (\offset + 15 * LONGSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  0 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  1 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  2 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  3 * STORSIZE)(\dst), \fixup)
+#if ((defined(CONFIG_CPU_MICROMIPS) && (LONGSIZE == 4)) || !defined(CONFIG_CPU_MICROMIPS))
+ EX(LONG_S, \val, (\offset +  4 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  5 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  6 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  7 * STORSIZE)(\dst), \fixup)
+#endif
+#if (!defined(CONFIG_CPU_MICROMIPS) && (LONGSIZE == 4))
+ EX(LONG_S, \val, (\offset +  8 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset +  9 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset + 10 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset + 11 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset + 12 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset + 13 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset + 14 * STORSIZE)(\dst), \fixup)
+ EX(LONG_S, \val, (\offset + 15 * STORSIZE)(\dst), \fixup)
 #endif
  .endm
 
@@ -71,16 +88,20 @@ LEAF(memset)
 1:
 
 FEXPORT(__bzero)
- sltiu t0, a2, LONGSIZE /* very small region? */
+ sltiu t0, a2, STORSIZE /* very small region? */
  bnez t0, .Lsmall_memset
- andi t0, a0, LONGMASK /* aligned? */
+ andi t0, a0, STORMASK /* aligned? */
 
+#ifdef CONFIG_CPU_MICROMIPS
+ move t8, a1 /* used by 'swp' instruction */
+ move t9, a1
+#endif
 #ifndef CONFIG_CPU_DADDI_WORKAROUNDS
  beqz t0, 1f
- PTR_SUBU t0, LONGSIZE /* alignment in bytes */
+ PTR_SUBU t0, STORSIZE /* alignment in bytes */
 #else
  .set noat
- li AT, LONGSIZE
+ li AT, STORSIZE
  beqz t0, 1f
  PTR_SUBU t0, AT /* alignment in bytes */
  .set at
@@ -99,24 +120,27 @@ FEXPORT(__bzero)
 1: ori t1, a2, 0x3f /* # of full blocks */
  xori t1, 0x3f
  beqz t1, .Lmemset_partial /* no block to fill */
- andi t0, a2, 0x40-LONGSIZE
+ andi t0, a2, 0x40-STORSIZE
 
  PTR_ADDU t1, a0 /* end address */
  .set reorder
 1: PTR_ADDIU a0, 64
  R10KCBARRIER(0(ra))
- f_fill64 a0, -64, a1, .Lfwd_fixup
+ f_fill64 a0, -64, FILL64RG, .Lfwd_fixup
  bne t1, a0, 1b
  .set noreorder
 
 .Lmemset_partial:
  R10KCBARRIER(0(ra))
  PTR_LA t1, 2f /* where to start */
+#ifdef CONFIG_CPU_MICROMIPS
+ LONG_SRL t7, t0, 1
+#endif
 #if LONGSIZE == 4
- PTR_SUBU t1, t0
+ PTR_SUBU t1, FILLPTRG
 #else
  .set noat
- LONG_SRL AT, t0, 1
+ LONG_SRL AT, FILLPTRG, 1
  PTR_SUBU t1, AT
  .set at
 #endif
@@ -126,9 +150,9 @@ FEXPORT(__bzero)
  .set push
  .set noreorder
  .set nomacro
- f_fill64 a0, -64, a1, .Lpartial_fixup /* ... but first do longs ... */
+ f_fill64 a0, -64, FILL64RG, .Lpartial_fixup /* ... but first do longs ... */
 2: .set pop
- andi a2, LONGMASK /* At most one long to go */
+ andi a2, STORMASK /* At most one long to go */
 
  beqz a2, 1f
  PTR_ADDU a0, a2 /* What's left */
@@ -169,7 +193,7 @@ FEXPORT(__bzero)
 
 .Lpartial_fixup:
  PTR_L t0, TI_TASK($28)
- andi a2, LONGMASK
+ andi a2, STORMASK
  LONG_L t0, THREAD_BUADDR(t0)
  LONG_ADDU a2, t1
  jr ra
@@ -177,4 +201,4 @@ FEXPORT(__bzero)
 
 .Llast_fixup:
  jr ra
- andi v1, a2, LONGMASK
+ andi v1, a2, STORMASK
--
1.7.9.5

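Two details of the memset hunks are worth spelling out. Under CONFIG_CPU_MICROMIPS, STORSIZE doubles LONGSIZE because the swp/sdp instructions store a register pair per instruction; and the unchanged `ori t1, a2, 0x3f; xori t1, 0x3f` pair rounds the byte count down to a multiple of the 64-byte unrolled block using only immediate forms. The latter is just a mask in C:

```c
#include <stddef.h>

/* C equivalent of the 'ori t1, a2, 0x3f; xori t1, 0x3f' pair in the
 * memset code above: number of bytes covered by full 64-byte blocks. */
static size_t full_blocks_bytes(size_t count)
{
    return (count | 0x3f) ^ 0x3f;    /* == count & ~(size_t)0x3f */
}
```

The or-then-xor form exists because MIPS has no single and-with-inverted-immediate instruction; the trick costs two 16-bit-encodable immediates instead of loading a mask into a register.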


[PATCH v99,11/13] MIPS: microMIPS: Optimise 'strncpy' core library function.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Optimise 'strncpy' to use microMIPS instructions and optimisations
that reduce binary size. When the microMIPS ISA is not being used,
the library function assembles to the original binary code.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/lib/strncpy_user.S |   28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/mips/lib/strncpy_user.S b/arch/mips/lib/strncpy_user.S
index 7201b2f..dea9304 100644
--- a/arch/mips/lib/strncpy_user.S
+++ b/arch/mips/lib/strncpy_user.S
@@ -3,7 +3,8 @@
  * License.  See the file "COPYING" in the main directory of this archive
  * for more details.
  *
- * Copyright (c) 1996, 1999 by Ralf Baechle
+ * Copyright (C) 1996, 1999 by Ralf Baechle
+ * Copyright (C) 2011 MIPS Technologies, Inc.
  */
 #include <linux/errno.h>
 #include <asm/asm.h>
@@ -33,22 +34,23 @@ LEAF(__strncpy_from_user_asm)
  bnez v0, .Lfault
 
 FEXPORT(__strncpy_from_user_nocheck_asm)
- move v0, zero
- move v1, a1
  .set noreorder
-1: EX(lbu, t0, (v1), .Lfault)
+ move t0, zero
+ move v1, a1
+1: EX(lbu, v0, (v1), .Lfault)
  PTR_ADDIU v1, 1
  R10KCBARRIER(0(ra))
- beqz t0, 2f
- sb t0, (a0)
- PTR_ADDIU v0, 1
- .set reorder
- PTR_ADDIU a0, 1
- bne v0, a2, 1b
-2: PTR_ADDU t0, a1, v0
- xor t0, a1
- bltz t0, .Lfault
+ beqz v0, 2f
+ sb v0, (a0)
+ PTR_ADDIU t0, 1
+ bne t0, a2, 1b
+ PTR_ADDIU a0, 1
+2: PTR_ADDU v0, a1, t0
+ xor v0, a1
+ bltz v0, .Lfault
+ nop
  jr ra # return n
+ move v0, t0
  END(__strncpy_from_user_asm)
 
 .Lfault: li v0, -EFAULT
--
1.7.9.5

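The reworked loop swaps which registers carry the running count and the loaded byte (so more operations get 16-bit microMIPS encodings) but keeps the copy semantics. A C model of the loop after the patch, with fault handling omitted and the function name invented for illustration:

```c
#include <stddef.h>

/* Model of the __strncpy_from_user_nocheck_asm loop: copy at most n
 * bytes, storing the terminating NUL if one is reached (it is stored
 * from the branch delay slot in the asm), and return the number of
 * non-NUL bytes copied, i.e. the string length. */
static long strncpy_model(char *dst, const char *src, size_t n)
{
    long copied = 0;

    while ((size_t)copied < n) {
        char c = *src++;
        *dst++ = c;          /* stored even when c == '\0' */
        if (c == '\0')
            break;           /* NUL is not counted in the return */
        copied++;
    }
    return copied;
}
```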


[PATCH v99,12/13] MIPS: microMIPS: Optimise 'strlen' core library function.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Optimise 'strlen' to use microMIPS instructions and optimisations
that reduce binary size. When the microMIPS ISA is not being used,
the library function assembles to the original binary code.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/lib/strlen_user.S |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/mips/lib/strlen_user.S b/arch/mips/lib/strlen_user.S
index fdbb970..e362dcd 100644
--- a/arch/mips/lib/strlen_user.S
+++ b/arch/mips/lib/strlen_user.S
@@ -3,8 +3,9 @@
  * License.  See the file "COPYING" in the main directory of this archive
  * for more details.
  *
- * Copyright (c) 1996, 1998, 1999, 2004 by Ralf Baechle
- * Copyright (c) 1999 Silicon Graphics, Inc.
+ * Copyright (C) 1996, 1998, 1999, 2004 by Ralf Baechle
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) 2011 MIPS Technologies, Inc.
  */
 #include <asm/asm.h>
 #include <asm/asm-offsets.h>
@@ -28,9 +29,9 @@ LEAF(__strlen_user_asm)
 
 FEXPORT(__strlen_user_nocheck_asm)
  move v0, a0
-1: EX(lb, t0, (v0), .Lfault)
+1: EX(lbu, v1, (v0), .Lfault)
  PTR_ADDIU v0, 1
- bnez t0, 1b
+ bnez v1, 1b
  PTR_SUBU v0, a0
  jr ra
  END(__strlen_user_asm)
--
1.7.9.5



[PATCH v99,13/13] MIPS: microMIPS: Optimise 'strnlen' core library function.

Steven J. Hill-3
In reply to this post by Steven J. Hill-3
From: "Steven J. Hill" <[hidden email]>

Optimise 'strnlen' to use microMIPS instructions and optimisations
that reduce binary size. When the microMIPS ISA is not being used,
the library function assembles to the original binary code.

Signed-off-by: Steven J. Hill <[hidden email]>
---
 arch/mips/lib/strnlen_user.S |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/lib/strnlen_user.S b/arch/mips/lib/strnlen_user.S
index 6445716..c5bdf8b 100644
--- a/arch/mips/lib/strnlen_user.S
+++ b/arch/mips/lib/strnlen_user.S
@@ -35,7 +35,7 @@ FEXPORT(__strnlen_user_nocheck_asm)
  PTR_ADDU a1, a0 # stop pointer
 1: beq v0, a1, 1f # limit reached?
  EX(lb, t0, (v0), .Lfault)
- PTR_ADDU v0, 1
+ PTR_ADDIU v0, 1
  bnez t0, 1b
 1: PTR_SUBU v0, a0
  jr ra
--
1.7.9.5


Re: [PATCH v99,01/13] MIPS: microMIPS: Add support for microMIPS instructions.

Kevin Cernekee-3
In reply to this post by Steven J. Hill-3
On Thu, Dec 6, 2012 at 9:05 PM, Steven J. Hill <[hidden email]> wrote:

> @@ -267,6 +268,225 @@ struct b_format { /* BREAK and SYSCALL */
>   unsigned int func:6;
>  };
>
> +struct fb_format { /* FPU branch format */
> + unsigned int opcode:6;
> + unsigned int bc:5;
> + unsigned int cc:3;
> + unsigned int flag:2;
> + unsigned int simmediate:16;
> +};

Some random thoughts/nitpicks on this section:

The microMIPS patch nearly quadruples the number of instruction
formats in the mips_instruction union, so it might be worth
considering questions like:

1) Is this the optimal way to represent this information, or have we
reached a point where it is worth adding more complex "infrastructure"
that would support a more compact instruction definition format?

2) Is there a better way to handle the LE/BE bitfield problem, than to
duplicate each of the 28+ structs?

On the nitpick front:

3) After "struct NAME_format {", are tabs or spaces used to offset the
comment?  Seems like a mix.  (The original code isn't entirely
consistent either, but this could be an opportunity to tidy it up.)

4) Spaces around ':' in e.g. "opcode:6" would be more consistent with
"most" of the other entries in inst.h

5) Should "FPU multipy" be "FPU multiply"?

6) The names of the special MIPS16e structs (rr, jal, i64, ri*) are a
little terse and may create a conflict someday.  If you don't want to
use "INSTR_format" for specific instruction names, you could use
something like "INSTR_instr" instead.

> +struct jal {
> + unsigned int opcode:5;
> + unsigned int x:1;
> + unsigned int imm20_16:5;
> + signed int imm25_21:5;
> + /* unsigned int    imm20_15:0;  here is only first 16bits in first HW */

I'm assuming this meant to say: "there are only 16 bits in the first halfword"?

It might be clearer to just leave a comment like:

+ signed int imm25_21:5;
+ /* the subsequent [or previous] halfword contains imm15_0 */

> +/*
> + * This functions returns 1 if the microMIPS instr is a 16 bit instr.

Suggest "This function returns"

> + * Otherwise return 0.
> + */
> +#define MIPS_ISA_MODE   01
> +#define is16mode(regs)  (regs->cp0_epc & MIPS_ISA_MODE)
> +
> +static inline int mm_is16bit(u16 instr)

Does the comment refer to the is16mode() macro, or to mm_is16bit()?

Does is16mode(), which tests EPC during exception handling, belong in
the uasm header file?

You might want to indicate that the value passed into mm_is16bit() is
either a complete 16-bit MM (microMIPS) instruction, or the most
significant halfword of a 32-bit MM instruction; i.e. it isn't
necessarily a complete instruction.

> + { insn_bltzl, 0, 0 },
> + { insn_bne, M(mm_bne32_op, 0, 0, 0, 0, 0), RT | RS | BIMM },
> + { insn_cache, M(mm_pool32b_op, 0, 0, mm_cache_func, 0, 0), RT | RS | SIMM },
> + { insn_daddu, 0, 0 },
> + { insn_daddiu, 0, 0 },
> + { insn_dmfc0, 0, 0 },

Do the "{ insn_X, 0, 0 }" entries indicate that these instructions
(which were defined in MIPS mode) are unsupported in MM mode?

> +static inline __uasminit u32 build_bimm(s32 arg)
> +{
> + if(arg > 0xffff || arg < -0x10000)

"if(" triggers a checkpatch violation (there are a few others too).

> + printk(KERN_WARNING "Micro-assembler field overflow\n");

Consider pr_warning()

> +static inline __uasminit u32 build_jimm(u32 arg)
> +{
> + if ((arg & ~(JIMM_MASK << 1)) - 1)
> + printk(KERN_WARNING "Micro-assembler field overflow\n");

This expression evaluates to -1 (i.e. print a warning) for small
values of arg, like 0 or 4.

Would something like this work?

arg >>= 1;
if (arg & ~JIMM_MASK)
        pr_warning("Micro-assembler field overflow\n");
return arg & JIMM_MASK;

> +/*
> + * The order of opcode arguments is implicitly left to right,
> + * starting with RS and ending with FUNC or IMM.
> + */
> +static void __uasminit build_insn(u32 **buf, enum opcode opc, ...)

There are a lot of similarities between the MM and MIPS versions of
these functions.  Likewise for build_bimm() and build_jimm(), which
only differ because the shifts/ranges are not the same.  Is there a
way to make better reuse of the code?

> +#ifdef CONFIG_CPU_LITTLE_ENDIAN
> + **buf = ((op & 0xffff) << 16) | (op >> 16);
> +#else
> + **buf = op;
> +#endif

If the MM instruction stream can consist of either 16-bit or 32-bit
instructions, shouldn't this be a "u16 **" pointer?

And if it is, does that make the LE/BE test unnecessary?

> diff --git a/arch/mips/mm/uasm-mips.c b/arch/mips/mm/uasm-mips.c
> new file mode 100644
> index 0000000..e86334b
> --- /dev/null
> +++ b/arch/mips/mm/uasm-mips.c

It would be good to have a separate commit that JUST splits uasm.c out
into uasm-mips.c (no other changes).  The commit message would ideally
explain the rationale.

> +/*
> + * This file is subject to the terms and conditions of the GNU General Public
> + * License.  See the file "COPYING" in the main directory of this archive
> + * for more details.
> + *
> + * Copyright (C) 2012 MIPS Technologies, Inc.  All rights reserved.
> + */

Suggest leaving the original uasm.c authors' copyright info in the
uasm-mips.c header.

> +#ifdef CONFIG_CPU_MICROMIPS
> +#define RS_SH 16
> +#define RT_SH 21
> +#define SCIMM_MASK 0x3ff
> +#define SCIMM_SH 16

Can these be defined in a single place?

> - WARN(arg & ~RS_MASK, KERN_WARNING "Micro-assembler field overflow\n");
> + if (arg & ~RS_MASK)
> + printk(KERN_WARNING "Micro-assembler RS field overflow\n");

Since this looks unrelated to MM support, it might be best to put it
in a separate commit.  What is the benefit of changing WARN() to
printk()?

Also, if printk() absolutely must be used, it might make sense to
consolidate the uasm warning prints into a single non-inlined
function, so that if somebody sees an overflow message and wants to
debug the problem, they can set one breakpoint instead of 10+
breakpoints.  WARN() may have helped indicate the source of the
failure but printk() won't.

> - WARN(arg & ~RT_MASK, KERN_WARNING "Micro-assembler field overflow\n");
> + if (arg & ~RT_MASK)
> + printk(KERN_WARNING "Micro-assembler RT field overflow\n");

FWIW, using a unique string for each error case means the compiler can
no longer point all of these printk's to a single copy of the same
string...

> +#ifdef CONFIG_CPU_MICROMIPS
> +#include "uasm-micromips.c"
> +#else
> +#include "uasm-mips.c"
> +#endif

There's an awful lot of potential reuse between these two
configurations and I'm not sure if it makes sense to split them this
way.

If possible it would be good if we didn't have to enable
CONFIG_CPU_MICROMIPS to know that the MM code compiles.

Re: [PATCH v99,02/13] MIPS: Whitespace clean-ups after microMIPS additions.

Kevin Cernekee-3
In reply to this post by Steven J. Hill-3
On Thu, Dec 6, 2012 at 9:05 PM, Steven J. Hill <[hidden email]> wrote:
> From: "Steven J. Hill" <[hidden email]>
>
> Clean-up tabs, spaces, macros, etc. after adding in microMIPS
> instructions for the micro-assembler.

My personal preference would be to fix up the whitespace in the
existing code first, then make the new (MM) code follow the convention
from the get-go.

> -struct fp0_format {      /* FPU multipy and add format (MIPS32) */
> +struct fp0_format {    /* FPU multipy and add format (MIPS32) */

"multiply"

> --- a/arch/mips/kernel/proc.c
> +++ b/arch/mips/kernel/proc.c
> @@ -73,6 +73,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
>         if (cpu_has_dsp)        seq_printf(m, "%s", " dsp");
>         if (cpu_has_dsp2)       seq_printf(m, "%s", " dsp2");
>         if (cpu_has_mipsmt)     seq_printf(m, "%s", " mt");
> +       if (cpu_has_mmips)      seq_printf(m, "%s", " micromips");
>         seq_printf(m, "\n");

This should probably go into a different commit.

Re: [PATCH v99,01/13] MIPS: microMIPS: Add support for microMIPS instructions.

Ralf Baechle DL5RB
In reply to this post by Kevin Cernekee-3
On Thu, Dec 06, 2012 at 11:50:10PM -0800, Kevin Cernekee wrote:

> Some random thoughts/nitpicks on this section:
>
> The microMIPS patch nearly quadruples the number of instruction
> formats in the mips_instruction union, so it might be worth
> considering questions like:
>
> 1) Is this the optimal way to represent this information, or have we
> reached a point where it is worth adding more complex "infrastructure"
> that would support a more compact instruction definition format?
>
> 2) Is there a better way to handle the LE/BE bitfield problem, than to
> duplicate each of the 28+ structs?

Something based on #defines, for example.  Back in the dark ages I
figured bitfields would be a nicer way to represent instruction formats.
Against the warning words of, I think, Kevin Kissel, I went for the
bitfields, and this would be a good opportunity to change direction.

  Ralf